
Neptune for MLOps: Better Data Science Tools


Neptune is an online tool for monitoring and storing your data science experiments in an orderly manner. It simplifies monitoring and control for your MLOps, allowing you to be more productive in ML engineering and research.

It is especially useful when working on several projects concurrently and testing different model settings. Neptune logs diverse types of data about your models’ training and visualizes experiment metadata as soon as training starts. With Neptune, you can organize results according to your specific needs and compare multiple runs.


How to set up Neptune for MLOps

You can set up Neptune in 6 short steps:

  1. Register & log in
  2. Set your project
  3. Install Python 3.x
  4. Install Neptune: pip install neptune-client
  5. Include other dependencies (e.g., for fastai: pip install neptune-fastai)
  6. Connect the script to the Neptune logging board
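
A minimal sketch of step 6 might look like this, assuming the neptune-client Python API (the workspace/project name and token are placeholders; in recent Neptune releases the import path and init function names differ slightly):

```python
# Minimal sketch: connect a training script to Neptune (neptune-client "new" API).
import neptune.new as neptune

run = neptune.init(
    project="my-workspace/my-project",   # placeholder: your workspace/project
    api_token="YOUR_NEPTUNE_API_TOKEN",  # optional if NEPTUNE_API_TOKEN is exported
)
```

From this point on, everything you assign or log to `run` shows up in the Neptune UI for that run.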

Access the largest Computer Vision models library, PyTorch Image Models, with fastai.

Note: you don’t have to pass the api_token option if you’ve already added it to your .bashrc file as the NEPTUNE_API_TOKEN variable.

 

What can you log in Neptune

After a proper setup, you can begin logging project data.

In the Appsilon Computer Vision and ML team, we use Neptune to monitor all of our models’ variations, their results, and the code versions tied to each run. It helps our team maintain our models in production reliably and efficiently.

But how do we log such data?

Keeping an eye out for Object Detection algorithms? See our introduction to YOLO Object Detection.

Well, once your script is connected to Neptune, you only need a simple one-liner for each type of data.

Metrics / losses
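
For example, logging a scalar per epoch is a one-liner on the `run` object created earlier (a sketch with dummy values; in newer Neptune releases `log()` is named `append()`):

```python
# Each call to log() appends one value to the series stored under the given key.
for epoch in range(10):
    train_loss = 1.0 / (epoch + 1)               # dummy value standing in for a real loss
    run["train/loss"].log(train_loss)            # metric series: train/loss
    run["train/accuracy"].log(1.0 - train_loss)  # metric series: train/accuracy
```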

 

Event files / model checkpoints – created during training
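
Checkpoints and other artifacts produced during training can be attached to the run with `upload()` (a sketch; the paths are placeholders):

```python
# Attach a single checkpoint file to the run.
run["model/best_checkpoint"].upload("checkpoints/model_epoch_10.pt")  # placeholder path

# Or attach every file matching a pattern, e.g. all checkpoints at once.
run["model/checkpoints"].upload_files("checkpoints/*.pt")
```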

 

Images & other files

You can log various model-building metadata types with Neptune. Here’s a list of a few supported file types:

  • Standard image formats – png, jpg, gif
  • Matplotlib figures
  • PIL images
  • NumPy arrays
  • TensorFlow tensors
  • PyTorch tensors

Note: the difference between the log and upload methods is that upload attaches a single image, while log appends to a series of images.
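
A short sketch of that difference, assuming a matplotlib figure and the `File` helper from the neptune-client new API (the import path is slightly different in newer Neptune releases):

```python
import matplotlib.pyplot as plt
from neptune.new.types import File

# upload(): attach a single image to the run.
fig = plt.figure()
plt.plot([1, 2, 3], [1, 4, 9])
run["charts/single_figure"].upload(File.as_image(fig))

# log(): append images one by one, building a series (e.g. one figure per epoch).
for epoch in range(3):
    fig = plt.figure()
    plt.plot(range(epoch + 2))
    run["charts/per_epoch"].log(File.as_image(fig))
```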

 

Files, scripts

Neptune also logs some basic info about every run automatically:

  • System information (creation time, hostname, id, owner, running time, etc.)
  • Hardware consumption and console logs (CPU/GPU monitoring, error, and output logs from your console)
  • Git information tied with your script

Besides that, you can also log the following (see the short sketch after the list):

  • Model hyperparameters / configurations
  • Notebook code snapshot – every time you run your code in Jupyter Notebook!
  • Data versions
  • DVC files
  • Python’s logger logs
  • Interactive visualizations (HTML files, bokeh plots, plotly plots)
  • Descriptions of the given run
  • Tags tied with a given run
  • Video files
  • Audio files
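
A short sketch of a few of these, again on the `run` object from the setup step (the values and paths are placeholders):

```python
# Hyperparameters / configuration: assign a dict to a namespace.
run["parameters"] = {"lr": 1e-3, "batch_size": 32, "optimizer": "Adam"}

# Tags for the run, stored under Neptune's sys/ namespace.
run["sys/tags"].add(["baseline", "fastai"])

# Arbitrary metadata, e.g. a data version string.
run["data/version"] = "v1.2-placeholder"

# Interactive or media files go in via upload(), just like checkpoints.
run["reports/summary"].upload("report.html")  # placeholder path
```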

How to distinguish and compare runs on Neptune

Runs table

The basic view in Neptune is the runs table. It stores one row per run, ordered by start time. Like almost everything in Neptune, the table can be easily customized: you can add, delete, or change its columns and tags. You can also search or filter your runs and group them by customizable parameters.

 

Compare runs tab

Choose the runs you want to compare by clicking the eye icon in the runs table. The comparison results then appear in the compare runs tab, located on the left side of the screen.

Here you can see charts, parallel coordinates, and side-by-side parameter comparisons.

Besides that, Neptune auto-generates some comparison dashboards for your runs. Neptune is a very customizable tool, so you can also create your own dashboards and save them for later!

 

Horizontal split tab of runs table and runs comparisons

In this view, the screen is split between the runs table and the comparison data. It’s convenient when you need to see both at the same time.

 

My experience with Neptune for MLOps – pros & cons

I have to admit that at first glance, I wasn’t a fan of Neptune. I had been using TensorBoard for a very long time and couldn’t imagine using another tool for experiment tracking. But the convenience of Neptune changed my mind. I was able to set up and start logging data in just a few minutes, without knowing the tool beforehand and after reading only a couple of lines of its clean, concise documentation.

See how Appsilon uses AI for biodiversity conservation

Neptune vs TensorBoard for MLOps

Moreover, Neptune is a little different from the popular TensorBoard. It is used not only for tracking and comparing experiments during training, but also for storing every piece of information and data about a given run that might prove useful or crucial in the future.

Neptune is also very customizable, and the user gets to decide how to use its options and capabilities. For me, the biggest advantage of Neptune is that it doesn’t depend on other libraries and modules. It’s a standalone, easy-to-set-up tool.

Pros

  • Easy to set up and start using compared with, for example, TensorBoard
  • Properly written with clear, concise documentation
  • Offline version
  • Customizable comparisons, runs table
  • Search and filtering options
  • With Neptune, you can keep a “backup” of all experiment data in one place, ready to be restored later
  • Trash option – from where deleted runs can be easily restored back to the runs table
  • Fast communication & support from the Neptune side
  • Possibility to track whole Jupyter notebooks
  • Independent of any other modules
  • Easy to share with someone outside of the project, by link
  • Not only for Python but also for R users
  • Integrations with other tools, for example fastai – but that’s material for another blog post
  • You can watch your training even if you’re not next to the computer – for example, from the browser on your mobile phone 😉

Cons

  • The user needs to take care of synchronization between offline and online versions manually.
  • Not fully open source; the individual plan would probably be enough for private usage, but it comes with monthly usage limits.
  • Runs comparisons could be more intuitive (I had to check the documentation to see exactly how to do it).
  • Some minor design issues are present, e.g., when comparing runs, the on-hover display of run IDs fails when the IDs are too long.

Summary

Neptune is an excellent data science tool. It’s easier to use for experiment tracking and storage than other, similar tools like TensorBoard. I recommend testing it out and seeing how you can improve your projects with Neptune for MLOps!