There are typically four stages to building AI models. This often means using several different AI services in a single project. But Murmurate takes care of every stage, from start to finish. You can also just use Murmurate for specific tasks, and complete the rest elsewhere.
THE MAIN STAGES OF BUILDING AI ARE
THE DATA
You need a large dataset to use as training data, and a smaller test dataset. No datasets or not enough data? Murmurate can help you with that too.
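As a rough illustration of what splitting data into a larger training set and a smaller test set looks like in practice, here is a minimal sketch in plain Python. The 80/20 ratio and the toy dataset are assumptions for the example, not Murmurate defaults:

```python
# Illustrative only: a simple random train/test split in plain Python.
import random

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Shuffle the rows and hold out a fraction as the test set."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

rows = list(range(100))       # stand-in for 100 data records
train, test = train_test_split(rows)
print(len(train), len(test))  # 80 20
```

The key point is that the test set is held back from training entirely, so it can later give an honest measure of the model's performance.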
LABELLING THE DATA
Most AI models are trained on labelled datasets, so accurately labelling your data comes next.
THE MODEL
Next you’ll train a model with your labelled data. This teaches it what to look for on its own.
USING THE MODEL
Once trained, you can deploy your model on your live data, and monitor its performance.
EACH STAGE INVOLVES CERTAIN TASKS
Within each stage of building AI are specific tasks such as connecting and exploring your data. Outlined below are the main stages, tasks and task options.
Stages
Tasks
Task Options
THE DATA
Upload a file
Connect to a database
Cloud storage
Combine datasets
Analyse and explore your data
Pre-process: clean, transform, reduce
LABELLING THE DATA
Design your own labelling rules, called Labelling Functions
Use Murmurate’s labelling templates
View metrics
View visualisations
Make adjustments
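To give a flavour of what a rule-based labelling function is, here is a minimal sketch in plain Python. The function names, labels, and voting scheme are assumptions for the example, not Murmurate’s API:

```python
# Illustrative sketch of programmatic labelling: each labelling function
# encodes one simple rule, and may abstain when its rule does not apply.
SPAM, NOT_SPAM, ABSTAIN = 1, 0, -1

def lf_contains_free(text):
    """Label as spam if the message mentions 'free'."""
    return SPAM if "free" in text.lower() else ABSTAIN

def lf_short_message(text):
    """Very short messages are usually not spam."""
    return NOT_SPAM if len(text.split()) < 4 else ABSTAIN

def label(text, functions):
    """Majority vote over the labelling functions that did not abstain."""
    votes = [v for v in (f(text) for f in functions) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

print(label("Claim your FREE prize now, act fast today",
            [lf_contains_free, lf_short_message]))  # 1 (spam)
```

Individually each rule is noisy, which is why viewing metrics and making adjustments are part of this stage: you refine the rules until their combined labels are accurate enough to train on.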
THE MODEL
Choose an AI model
Use your own AI model
View metrics
View visualisations
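As a sketch of what training a model on labelled data means, here is a deliberately tiny classifier in plain Python. A nearest-centroid model stands in for whichever model you choose; the data and labels are made up for the example:

```python
# Illustrative only: "training" here means computing the mean (centroid)
# of each class from labelled examples; "predicting" means assigning the
# class whose centroid is closest to a new point.

def train(points, labels):
    """Compute the centroid of each class from labelled examples."""
    centroids = {}
    for lbl in set(labels):
        members = [p for p, l in zip(points, labels) if l == lbl]
        centroids[lbl] = tuple(sum(c) / len(members) for c in zip(*members))
    return centroids

def predict(centroids, point):
    """Assign the class whose centroid is closest to the point."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], point))

model = train([(0, 0), (1, 1), (8, 9), (9, 8)], ["a", "a", "b", "b"])
print(predict(model, (0.5, 0.5)))  # a
```

Real models are far more sophisticated, but the shape is the same: the labelled data determines the model’s parameters, and those parameters then drive predictions on data it has never seen.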
USING THE MODEL
Deploy the model, either on our servers or on your own
View metrics
View visualisations
FLEXIBLE AND CUSTOMISABLE WORKFLOWS
In the Murmurate Project Studio, you can create a fully customised workflow for each of your AI projects.
Murmurate is flexible; you don’t have to complete every task with us. You might already have labelled data, or your chosen type of AI model may not need it. You can repeat stages as many times as you like. For example, after deploying your model you might need to return to the labelling stage to make improvements, or go back and explore your data again to work out better labelling rules.
With Murmurate you can iterate as many times as necessary. The evaluation tasks show where improvements can be made, so you can go back and adjust your data labels. This fine-tuning helps achieve the best possible results.
ESSENTIAL TASKS ONLY WORKFLOW
The data used in this workflow required no exploration or pre-processing. It was also not necessary to explain the model’s behaviour or monitor its performance, so these tasks were simply left out of the workflow.
A TYPICAL END-TO-END WORKFLOW
A typical workflow involves all of these tasks, potentially returning to previously completed tasks to make improvements. This can be done as many times as necessary.
NO DATA LABELLING WORKFLOW
In this example workflow, the data exploration and labelling have already been completed elsewhere, so it’s possible to go straight to training the model. It’s common that, after a period of deployment, model monitoring detects a drop in performance. When this happens (typically due to data drift), it’s possible to connect fresh data and retrain the model.
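To illustrate the idea behind data drift detection, here is a minimal sketch in plain Python: it compares the mean of a feature in live data against the same feature in the training data. The 20% threshold and the toy numbers are arbitrary assumptions for the example, not Murmurate defaults:

```python
# Illustrative sketch of a simple data-drift check on one feature.

def mean(values):
    return sum(values) / len(values)

def drift_detected(training_values, live_values, tolerance=0.2):
    """Flag drift when the live mean shifts more than `tolerance` (relative)."""
    baseline = mean(training_values)
    shift = abs(mean(live_values) - baseline) / abs(baseline)
    return shift > tolerance

train_ages = [30, 35, 40, 45, 50]  # feature seen during training, mean 40
live_ages = [55, 60, 62, 58, 65]   # feature seen in production, mean 60
print(drift_detected(train_ages, live_ages))  # True
```

When a check like this fires, the live data no longer looks like the data the model was trained on, which is exactly the cue to connect fresh data and retrain.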