How It Works: Overview

The Process

Before coding up your first application, take a moment to familiarize yourself with the process of using the Nexosis API to make forecasts and predictions or to measure impact.

We’ve worked hard to keep the high-level process simple. Here’s the basic process:

  1. Submit a dataset
  2. Start a model building session
  3. Retrieve results

Then optionally:

  1. Update your Dataset with additional data
  2. Start a new Session. Repeat.
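The steps above can be sketched as a single loop. This is a minimal illustration only: the function names (`submit_dataset`, `start_session`, `get_results`) are hypothetical placeholders for the HTTP calls described in the sections that follow, not part of the Nexosis API itself.

```python
# A minimal sketch of the high-level loop. The client methods used here
# (submit_dataset, start_session, get_results) are hypothetical stand-ins
# for the real HTTP endpoints covered in Sending Data and Sessions.

def run_forecast_cycle(client, dataset_name, rows, target_column):
    """Submit data, start a model building session, and collect results."""
    client.submit_dataset(dataset_name, rows)        # 1. Submit a dataset
    session = client.start_session(dataset_name,     # 2. Start a model
                                   target_column)    #    building session
    return client.get_results(session["sessionId"])  # 3. Retrieve results
```

To repeat the optional steps, call the same function again after appending new rows to the dataset; the API builds a fresh Model from the updated data.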

[How It Works]

Submit a dataset

All Machine Learning processes must start with data and a question you are trying to gain insight into. This question could range from “How many robots will I sell next Tuesday?” to “What was the impact of our new marketing strategy on our Facebook likes for the month following our new marketing campaign?”

To use the Nexosis API, you must provide us with a Dataset. This Dataset is simply a series of related values. For example, it could be the number of robot units sold each day for the last three years, or the hourly number of website visitors for the last few months. It could contain attributes of many different houses - the number of rooms, square footage, zip code, year built. Along with this historical data, you can add additional data points. This is where business intuition, understanding, and creativity come into play. These additional data points could be a series of calendar events such as Black Friday, a promotion running at that store that week, a major sporting event on TV that afternoon, or heavy snow over the lunch hour.
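As a sketch of what such a Dataset looks like, the rows below combine a daily sales series with an extra feature column carrying business knowledge. The column names (`timestamp`, `sales`, `promotion`) and values are illustrative, not required by the API:

```python
from datetime import date, timedelta

# Illustrative only: a Dataset is a series of related values -- here, daily
# robot sales plus a "promotion" feature column the model can learn from.
start = date(2017, 1, 1)  # arbitrary example start date
rows = [
    {
        "timestamp": (start + timedelta(days=i)).isoformat(),
        "sales": 100 + (i % 7) * 5,           # the target we later forecast
        "promotion": 1 if i % 7 == 5 else 0,  # extra business-knowledge signal
    }
    for i in range(30)  # 30 days of history
]
```

The `promotion` column is the kind of additional data point described above: the algorithms can use it to explain sales spikes that the raw history alone would not.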

The data submitted is very important, as it is used to discover relationships within the data using a host of algorithms. This discovery process happens during what we call a Session.

Read Sending Data for more of the technical details on submitting data.

Start a model building session

A Session is simply the discovery process using the supplied Dataset.

There are several types of sessions you can initiate, and each type of Session can help answer different types of questions, but all act on the datasets you provide.

  • Classification - when you want to predict which category something belongs to.
  • Regression - when you want to make a prediction based on the relationships discovered in the data.
  • Forecasting - when you want to forecast how something might change over time.
  • Impact Analysis - when you want to run a forecast over a historical time period to measure the impact of a change.
  • Anomaly Detection - when you want to identify data points that are unusual compared to the rest of the dataset.

This is where the data science happens at scale. Behind the scenes, a host of algorithms works to discover what makes your Dataset tick: which factors influence others, where the correlations are, and ultimately to produce what's called a mathematical Model. This Model is then used, and in many cases stored permanently, for you to reuse.

Read about Sessions for more technical details on building sessions.
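To make the shape of a session request concrete, the sketch below only builds a "start forecast session" URL without sending it. The base URL is an explicit placeholder, and the endpoint path and parameter names are assumptions for illustration; take the real ones from the Sessions documentation.

```python
from urllib.parse import urlencode

# Placeholder host, NOT the real API endpoint -- see the Sessions docs.
BASE_URL = "https://example.com/v1"

def build_forecast_request(dataset_name, target, start_date, end_date):
    """Build (but do not send) a hypothetical forecast-session request URL.

    Parameter names here are illustrative assumptions, not a spec.
    """
    params = urlencode({
        "dataSetName": dataset_name,  # the Dataset the session acts on
        "targetColumn": target,       # the variable to solve for (the target)
        "startDate": start_date,      # forecast window start
        "endDate": end_date,          # forecast window end
    })
    return f"{BASE_URL}/sessions/forecast?{params}"
```

Whatever the exact endpoint, the essentials are the same: name the Dataset, name the target column, and (for forecasting) give the time window you want predictions for.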

Evaluate the results and make predictions

Once the session has completed successfully, the Nexosis API builds a custom mathematical Model based on the data that was used to build, or train, that model. You can think of a model as a custom algorithm built on the relationships found between the different data contained in the dataset. Depending on the type of session, the results will vary somewhat, but generally they will solve for a variable you want to predict, also called the target. Once the model is built, it can be evaluated and then used for predictions, forecasts, and to evaluate impact.

Read Retrieving a Session for more technical details.
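One generic way to evaluate a model once you have its results is to compare predictions against actuals you held back. The mean-absolute-error helper below is a plain sketch of that idea and is not part of the API:

```python
def mean_absolute_error(actuals, predictions):
    """Average absolute gap between observed values and predicted values.

    A smaller value means the model's predictions track reality more closely.
    """
    assert len(actuals) == len(predictions), "series must be the same length"
    return sum(abs(a - p) for a, p in zip(actuals, predictions)) / len(actuals)
```

For example, actuals of `[10, 20]` against predictions of `[12, 18]` give a mean absolute error of `2.0`, meaning the model missed by two units on average.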

Not sure what to do next? Next Steps is a good place to start.