FAQ on Machine Learning Forecasting

Straightforward answers to all of your ML forecast questions

Categories: finance, machine-learning, forecasting

Author: Mike Tokic

Published: August 12, 2024

Over the last few years I’ve presented to hundreds of people outside of Microsoft about how we approach machine learning (ML) forecasting within Microsoft finance. A lot of great questions were asked during those conversations. In this post I want to highlight some of the most commonly asked questions and my take on answering them. Hopefully this can be a quick reference for anyone who is ML curious or wants to deepen the ML work being done on their team. Use the table of contents to skip around to the sections you’re most interested in. If there are any topics missing, please reach out to me via LinkedIn and I will continue to update this post.

FAQ Table of Contents

  • Data
    • Getting high quality historical data
    • Using third party data
    • New info not found in training data and one time events
    • Handling Outliers
  • Technical
    • Interpreting the black box
    • What models to use, should we use deep learning
    • What programming language or framework to use
    • Using large language models like ChatGPT to forecast
    • What level of accuracy is good
  • Humans
    • How to make forecast owners accountable for the ML number
    • Building Trust in the ML forecast
    • Who owns the ML creation process
    • How to get started with ML
    • Going from ML as a triangulation point to replacing manual human forecasts
    • Building data science talent in finance

Data

Getting high quality historical data

Garbage in, garbage out. That’s probably the most common saying in the world of ML. If you cannot get high quality historical data, then there is no easy way to produce an accurate ML forecast. You can’t work your way around noisy or incomplete data. If your data is messy, hard to find, and comes from 10 different systems, then ML is not something you should be worried about. A nice bow on top of a pile of crap is still a pile of crap. Fix your data first, then focus on ML after.

Back to Table of Contents

Using third party data

Your company’s business is most likely impacted by greater market forces outside of your control, for example the health of the economy or how much money customers have to spend. Adding data from outside of your company (third party data) as features in your ML models is a good way to improve forecast accuracy, while also being able to describe which outside forces impact your business the most. Some data is freely available, while other sources have to be paid for. What data you use comes down to your domain knowledge of your business.

Free Data
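As one hedged example of working with free data, the sketch below pulls the US unemployment rate from FRED using the pandas-datareader package. The package, the series, and the join step are all assumptions for illustration; swap in whatever source actually matters for your business.

```python
# Sketch: pull a free macroeconomic series (US unemployment rate) from FRED.
# Assumes pandas-datareader is installed: pip install pandas-datareader
import pandas_datareader.data as web

# Monthly unemployment rate from the St. Louis Fed (FRED), 2015 onward
unrate = web.DataReader("UNRATE", "fred", start="2015-01-01")
print(unrate.tail())

# Hypothetical next step: join to your own monthly revenue history so it
# becomes a model feature, e.g. features = revenue_df.join(unrate, how="left")
```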

Paid Data

Back to Table of Contents

New info not found in training data and one time events

If you are changing the price of your product in three months, this is most likely going to impact your future revenue forecast. But if you have never changed the price of your product before, then a ML model cannot learn from that information. For a model to learn from a one time event, that event needs to be present in the historical data the model is trained on. The future must always learn from the past.

Back to Table of Contents

Handling Outliers

Outliers in your data can have a bad impact on your future forecast. They can hurt accuracy or give false signals about the future. There are many ways to deal with outliers. The easiest is to use statistical methods to identify and remove them, treating each one as a missing value you can then replace. Sometimes outliers are more subtle and take a trained eye to spot. This is where the domain expertise of a person comes into play.
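Here is a minimal sketch of that “flag it, treat it as missing, then impute it” workflow in python, using a simple z-score rule on a made-up monthly revenue series. The 2 standard deviation threshold is an assumption you should tune, and anything it flags deserves a second look from a domain expert.

```python
import numpy as np
import pandas as pd

# Made-up monthly revenue with one obvious spike (400)
revenue = pd.Series(
    [100, 105, 110, 400, 115, 120, 118, 125],
    index=pd.period_range("2024-01", periods=8, freq="M"),
)

# Flag points more than 2 standard deviations from the mean (tune this threshold)
z_scores = (revenue - revenue.mean()) / revenue.std()
is_outlier = z_scores.abs() > 2

# Treat flagged outliers as missing, then impute by interpolating between neighbors
cleaned = revenue.mask(is_outlier).interpolate()
print(cleaned)
```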

Back to Table of Contents

Technical

Interpreting the black box

This one is a toughie. When a person creates a forecast manually, most often using excel, someone else can come into that financial model and trace cell by cell exactly what’s going on. Going from input data, to assumptions, to the formulas that create the final output. This is the kind of exactitude that allows accountants to sleep peacefully at night. Everything is in order and everything is perfectly understood. But we are not accountants. This is finance. We have to make calls about the future that are uncertain. We can never have 100% certainty that something is going to happen. If you do have 100% certainty, then your company is doing something illegal. Get out now!

The biggest paradigm shift someone has to make with machine learning is giving up this total control of the forecast, and in essence taking a leap of faith. Machine learning models are enigmas, sometimes akin to magic. They cannot be perfectly understood because they capture non-linear relationships in data that a human never could. That’s why we have them: they can work better and faster than our human brains at some tasks.

There are ways to understand these models, but they cannot be perfectly audited like a manual forecast done in excel. Instead they have to be interrogated. Not like a criminal wanted for war crimes, but more like a therapist talking to their patient. There is no way for a therapist to know exactly what’s going on inside of their patient’s mind. But they can start to ask questions that give clues into what’s going on and see why the person has made past decisions in their life.

The best resource I know of on explaining ML models is Interpretable Machine Learning by Christoph Molnar. Here’s a quick overview of the methods described in the book.

  • Model Specific: These are models like linear regression or a single decision tree where we can see exactly what’s going on under the hood. The model structure is more like an excel formula we can trace step by step. But because they are easy to explain, they are simple in nature and may not produce the most accurate forecast. That is the tradeoff between having a forecast that can be easily explained versus having a forecast that is the most accurate. The more accurate the forecast, the more likely you cannot explain it perfectly.
  • Model Agnostic: For more advanced models like gradient boosted trees and deep learning, we can use methods that approximate what’s going on under the hood of a complex model. This is when we have to act like a therapist and start asking questions to our model and see what answers it gives back. There are two ways of doing this.
    • Global Interpretability: This uses methods that can see overall what’s impacting the model the most. For example you can see what input variable (or feature) is the most important in the model overall.
    • Local Interpretability: This uses methods to see what’s going on for each individual forecast data point. For example you can see, for a specific future forecast, what’s impacting that number the most. A minimal sketch of both approaches follows this list.
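Here is a rough sketch of what that interrogation can look like in python: permutation importance from scikit-learn for the global view, and SHAP values for the local view. The features and data are made up, and the shap package is assumed to be installed.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
import shap  # assumption: pip install shap

# Made-up data: three features (think price, marketing spend, GDP growth)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)

model = GradientBoostingRegressor().fit(X, y)

# Global: which features matter most to the model overall?
global_imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("global importances:", global_imp.importances_mean)

# Local: which features pushed one specific prediction up or down?
explainer = shap.TreeExplainer(model)
local_imp = explainer.shap_values(X[:1])  # SHAP values for the first row only
print("local contributions:", local_imp)
```

The global numbers tell you which features the model leans on overall, while the local values tell you what pushed one specific forecasted number up or down.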

The last thought I’d leave you with is this. Have you ever not used ChatGPT because you couldn’t get an explanation of its answer? For example, maybe you asked it to help you write some excel formulas to format dates. Do you trust the output because the excel formula was correct, or because it could tell you exactly how it came to that conclusion? What if the explanation it gave was made up or a hallucination? If the excel formula is correct you would still use it, right? Even the CEO of OpenAI, Sam Altman, cannot explain how models like GPT-4 think under the hood. But hey, ChatGPT was still one of the fastest growing consumer products of all time. Sometimes imperfect interpretability is ok. But maybe your CEO is still demanding an explanation of the forecast numbers, so this is still a hard problem to solve in finance.

Back to Table of Contents

What models to use, should we use deep learning

People are always attracted to the hot new thing, and I can’t blame them. New is exciting. But when it comes to ML forecasting, new isn’t always better. The newest trend in ML is all about deep learning, or models loosely inspired by the human brain. While they work really well for things like analyzing photos and text, using them on tabular data (aka excel-style data) hasn’t always worked out well. That’s why I recommend trying deep learning last.

Here are the models to use first, then you can always resort to deep learning if need be. A quick benchmark sketch follows the list below.

  • Univariate Models: These are the simplest forecasting models. They only need one variable, hence the name univariate. If you want to forecast revenue, then you only need historical revenue and you’re off and running. They run extremely fast and can scale to millions of data combinations without spending too much on cloud compute. Here are a few popular ones.
    • ARIMA: An ARIMA (AutoRegressive Integrated Moving Average) model predicts future values in a time series by combining differencing (modeling the difference between periods), autoregression (using past values), and moving averages (using past forecast errors). It’s the most common univariate model in the forecasting game.
    • Exponential Smoothing: Forecasts future values in a time series by applying decreasingly weighted averages of past observations, giving more importance to recent data points to capture trends and seasonal patterns.
    • Seasonal Naive: Predicts future values by simply repeating the observations from the same season of the previous period, assuming that future patterns will mimic past seasonal cycles. Don’t sleep on this one! You’d be surprised how often it comes in handy as a good benchmarking model to compare with more complicated models.
  • Traditional ML Models: After trying univariate models, it’s time to try more traditional machine learning models. These are models built specifically for tabular data, or data that can live in a SQL table or excel spreadsheet. These models are multivariate, which allow them to incorporate outside variables as features to improve their forecast accuracy. They require more handling than a model like ARIMA, since they need feature engineering and proper time series cross-validation. Multivariate models can also learn across multiple time series at the same time, instead of being trained on just a single time series like a univariate model. Here are a few common multivariate models.
    • Linear Regression: Predicts future values by fitting a line to the historical data, where the line represents the relationship between the dependent variable and one or more independent variables.
    • XGBoost: Predicts future values using an ensemble of decision trees, boosting their performance by iteratively correcting errors from previous trees, resulting in a highly accurate and robust prediction model.
    • Cubist: Predicts future values by combining decision trees with linear regression models, creating rule-based predictions that incorporate linear relationships within each segment of the data for greater accuracy.
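Here is the quick benchmark sketch promised above: a seasonal naive forecast next to a simple ARIMA from statsmodels, compared on the last 12 months of a made-up monthly series. The series, the train/test split, and the ARIMA order are all placeholder assumptions; real work would use proper time series cross-validation.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Made-up monthly revenue with a trend plus yearly seasonality
idx = pd.date_range("2018-01-01", periods=72, freq="MS")
y = pd.Series(100 + np.arange(72) + 10 * np.sin(2 * np.pi * np.arange(72) / 12), index=idx)

train, test = y[:-12], y[-12:]

# Seasonal naive: repeat the last observed year of values
snaive_fcst = train[-12:].values

# ARIMA with a simple hand-picked order (real work would use automated order selection)
arima_fcst = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=12).values

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100

print("seasonal naive MAPE:", round(mape(test.values, snaive_fcst), 1))
print("ARIMA MAPE:", round(mape(test.values, arima_fcst), 1))
```

If a fancier model can’t clearly beat the seasonal naive numbers, that’s a useful signal in itself, which is exactly why it makes such a good benchmark.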

Back to Table of Contents

What programming language or framework to use

Should I use python? But what if I learned R in my statistics class? What about stata or good ole javascript? Ask 10 data scientists what programming language to use and you’ll probably get 10 different answers. There is no right answer. It’s kind of like arguing about which hammer to use when building a house. Shouldn’t we just be worried about getting the house built, and making sure we don’t screw it up?

Here is my take on the ML language wars. You should use both python and R. Different languages offer different things, and depending on the task you might want to use one over the other. Often it’ll boil down to specific open source software that might only be available in one language but not the other. So over the course of a long data career it’s probably a good idea to be competent in both.

With that said, if you could only learn one programming language, learn python. That’ll get you the farthest the fastest in terms of useful knowledge to get building. I think in a few years these debates will go away, because large language models (LLM) will come and save the day. Imagine writing code in your favorite language, using the packages you like, and then having a LLM take your code and translate it into blazing fast machine code that runs like a race car. Then it won’t matter if you only know one language or the other. That’s where I hope we’re headed.

Back to Table of Contents

Using large language models like ChatGPT to forecast

With the explosion of large language models (LLM) that can do everything from tell dad jokes to write production grade code, can’t we just offload all forecasting work to them? In essence you could, but I think it’s overkill and leaves a lot to be desired. LLMs take in text and spit out text. They are good with words, but not that good with numbers. They can’t perform on par with a calculator, because that’s not how they were designed. So you can’t just copy a table from excel, give it to ChatGPT, and hope to get next quarter’s revenue forecast. It might give you one, but it won’t be a good forecast. It might even make something up.

The more sensible route is to have the LLM write code that can create forecast models and have it executed for you. For example, use the code interpreter feature in ChatGPT to take your uploaded CSV file and execute a bunch of python code against it to get a final forecast. This kind of workflow might be good for initial exploration, but it shouldn’t be used in a production setting where you need an updated forecast each month. It’s kind of like needing a place to sleep and building a new house from scratch each night, sleeping there once, then building another house from scratch the next night. Having LLMs produce code on the fly each time can lead to inconsistent results that may not be reproducible when you ask ChatGPT to do it again. You could take the code from the first forecast iteration, save it, and either run it yourself each time or give it to ChatGPT as a prompt. But even then you are still missing out. Some automated forecasting packages, like the one I own called finnts, have over 10,000 lines of code. A LLM will most likely not write that much code to answer one prompt, and even if it did, having you try to save and manage all of that code is out of the question.

I think LLMs can still play some part in the forecasting process. They are useful before and after the actual model training is done. For example, you can have code interpreter analyze your data for things like outliers, or help run correlation analysis to see what variables could help improve your forecast accuracy. Then you take those learnings and run the forecast outside of the LLM environment, or you could have the LLM call an outside function to kick off a forecast. This is called “function calling”, where a LLM can use an outside tool (most often calling an API via code) to accomplish a task the LLM cannot do itself (like getting the current weather). LLMs can then be used after the forecast process is run to analyze the final forecast outputs. Tools like code interpreter can make charts and analyze historical back testing accuracy.
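To make the function calling idea concrete, here is a hand-wavy sketch of the pattern in python. Everything here is hypothetical: the JSON reply format, the run_forecast stub, and the assumption that your LLM SDK hands you back a tool name plus arguments. The point is simply that your own code, not the LLM, executes the forecast.

```python
import json

def run_forecast(series_name: str, horizon: int) -> dict:
    """Stand-in for your real forecasting pipeline (e.g. calling a forecast API)."""
    return {"series": series_name, "horizon": horizon, "forecast": [105.2, 110.8, 114.3]}

# Registry of tools the LLM is allowed to call
TOOLS = {"run_forecast": run_forecast}

def handle_llm_reply(reply: str) -> dict:
    """Assume the LLM replies with JSON naming a tool and its arguments."""
    call = json.loads(reply)
    func = TOOLS[call["tool"]]
    return func(**call["arguments"])

# Simulated model output asking for a 3 month revenue forecast
reply = '{"tool": "run_forecast", "arguments": {"series_name": "revenue", "horizon": 3}}'
print(handle_llm_reply(reply))
```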

Maybe in the future the LLM can help explain how the final forecast was created, or better yet answer questions about forecast variance once actuals land in the future. But using it to actually create predictions or train models itself is still a tall order. With that said there are new time series specific LLMs that are being released, so this is an exciting area to watch closely.

Back to Table of Contents

What level of accuracy is good

There is no right answer. It all depends on the specific data you are using and how it compares to non-ML approaches you have done previously.

A common metric in time series forecasting is MAPE, which stands for mean absolute percentage error. This is very similar to the variance-to-forecast percent metrics you might already use. Think of it as the average percent error (as an absolute value) across every forecasted data point. There are plenty of other metrics out there, but MAPE is a good one since it’s not dependent on scale and percentages are easy for anyone to wrap their head around.

A MAPE of 5% means that on average, your forecast is off by plus or minus 5%, so the closer to zero the better. If you ask any finance person what kind of MAPE they want for their forecast, almost all of them will say less than 1%, or more than 99% accuracy. Think of accuracy as the inverse of MAPE. Maybe the ML forecast has a MAPE of 10%, but your previous manual excel model ended up having a 20% MAPE. The ML forecast has a 50% reduction in forecast error, even though it’s still in the double digits. So evaluating ML is always relative to the kind of performance you got with other forecast methods, probably done in excel.
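Here is a small worked example of the math, with made-up actuals and two made-up forecasts (one “ML”, one “excel”) over four quarters, showing MAPE and the accuracy framing side by side.

```python
import numpy as np

actuals = np.array([100.0, 120.0, 110.0, 130.0])
ml_fcst = np.array([95.0, 130.0, 105.0, 140.0])
excel_fcst = np.array([80.0, 140.0, 130.0, 110.0])

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100

print("ML MAPE:   ", round(mape(actuals, ml_fcst), 1), "%")     # ~6.4%
print("Excel MAPE:", round(mape(actuals, excel_fcst), 1), "%")  # ~17.6%
# "Accuracy" framing: roughly 100% minus MAPE, so ~93.6% vs ~82.4% here
```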

With that said, here are some rules of thumb I use when running my own ML forecasts.

  • Daily and Weekly Data: Less than a 10% MAPE is terrific. But even MAPEs as high as 30% are ok, because often we might sum up a daily forecast to a monthly or quarterly level, and when evaluated at that aggregate level you may start to see drastically improved MAPEs.
  • Monthly, Quarterly, Yearly Data: There are a few levels of accuracy that I think are good at this level. Again, it’s all relative to what kind of performance you had before using ML.
    • Less than a 5% MAPE is good
    • Less than a 3% MAPE is great
    • Less than a 1% MAPE is amazing

Even going from a 3% MAPE to a 2% MAPE is a big achievement, because it’s a lot harder to do than going from a 15% MAPE to a 10% MAPE. Both are the same relative improvement, but the closer you get to zero, the harder each additional percentage point of error is to remove.

Back to Table of Contents

Humans

How to make forecast owners accountable for the ML number

Having a ML model automate your forecast process is great, until you have to tell your CFO why your quarterly forecast is off by xyz%. “It was ML’s fault” is not the right answer. But it’s hard for a human to be on the hook for work that a machine did for them. Maybe the ML forecast came from another engineering team, so the final owner of the forecast may have had no part in creating it. This can make accountability hard.

In order to get people on board with transitioning more of their work to ML, you need senior leadership buy in. People at the top need to be invested in the promise of ML and the tradeoffs it might provide. Since you cannot audit a ML forecast like you can with an excel model (tracing cell by cell) there is an essential leap of faith that has to happen.

In addition to having senior leadership buy in to improve accountability, being able to adjust the output from ML can also help. Financial analysts can impart their domain expertise about their business by making small manual adjustments to the ML forecast. That way ML can get you 80% of the way to a completed forecast, and a human might make adjustments on that final 20% to finalize the forecast. Having the forecast owners be “humans in the loop” of a ML process adds a sense of ownership, which can then improve accountability.

A final way to improve accountability is through the IKEA effect, which states that we value things more if we are involved in making them. The best way for this to work is if the final forecast owners create their own ML forecast through self-serve tools that abstract away the complexity of ML and allow analysts to upload data and get back forecasts with a few clicks of their mouse. This is exactly what we did in Microsoft finance and it has worked out well so far.

Back to Table of Contents

Building Trust in the ML forecast

Building trust in the ML forecast is the hardest part of using ML. There are three ways that have worked well in helping non-technical financial forecast owners warm up to and fully transition to using ML to forecast.

  • Historical Accuracy: The best way to get someone on board with a forecast into the future is to show them how well a similar forecast has performed in the past. It’s not a perfect proxy for future performance, but it gives the end user a good idea of how well a ML model is performing. Common performance metrics to use are MAPE (mean absolute percentage error) and RMSE (root mean squared error). MAPE is similar to existing variance-to-forecast calcs already done by financial analysts, so it’s a good error metric to convey past performance. When calculating historical performance, aka back testing, it’s a good idea to create hypothetical forecasts for the most recent periods of your data, making sure you cover enough historical time to ensure your ML model is robust. For example, with a monthly forecast you might only want to forecast the next 3 months into the future. If you have 5+ years of historical data, you could have 4 historical back tests where you train a model and produce a 3 month forecast for each of the last 4 quarters in your historical data (see the sketch after this list).
  • Prediction Intervals: Understanding the uncertainty of the future forecast can also help build trust. Think of prediction intervals as upper and lower bounds of the future forecast that convey a certain probability that the future value will land in between those bounds. Common prediction intervals are 80% and 95%. For example, you might have a future forecast of $100 for next month, with an 80% prediction interval of $80 to $120. This means there is an 80% chance that the actual value for next month will be between $80 and $120. So the tighter the prediction interval, the better. Having a prediction interval that’s +-5% of the forecasted value gives more comfort to the end user than one that’s +-30%. Just make sure that the end user of the forecast knows that those upper and lower bounds are not a “bull case” and “bear case” of what could happen in the future, like they might be used to in other financial modeling, but instead just a way to capture the uncertainty of the future prediction.
  • Interpretability: Finally, another good way to build trust is being able to explain how a future ML forecast was created. You will not be able to explain it perfectly like you can an excel model (by tracing through it cell by cell), but you can use methods to poke and prod the model to see what might be going on under the hood.
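And here is the back testing sketch referenced above: four rolling windows covering the last 12 months of a made-up monthly series, each training on everything before the window and producing a 3 month forecast that gets scored with MAPE. ARIMA is just a stand-in here; swap in whatever model you actually use.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Made-up monthly revenue with a trend plus yearly seasonality
idx = pd.date_range("2018-01-01", periods=72, freq="MS")
y = pd.Series(100 + np.arange(72) + 10 * np.sin(2 * np.pi * np.arange(72) / 12), index=idx)

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100

horizon = 3
for back_test in range(4, 0, -1):  # 4 back test windows over the last 12 months
    cutoff = len(y) - back_test * horizon
    train, test = y[:cutoff], y[cutoff:cutoff + horizon]
    fcst = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=horizon)
    print(test.index[0].strftime("%Y-%m"), "window MAPE:", round(mape(test.values, fcst.values), 1))
```

The same loop can be extended to capture uncertainty as well, for example by storing the upper and lower bounds each model produces alongside the point forecast.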

Back to Table of Contents

Who owns the ML creation process

Being at Microsoft we are lucky to have engineering teams (sometimes called IT in other companies) that sit inside of finance and report to the CFO. This allows us to have software engineers who are solely focused on improving the lived experience of each of the 5,000+ employees under the CFO.

On those engineering teams there are people like myself who work on ML. Most of my time is spent helping produce and evangelize ML forecasting. My team has built two distinct ways for ML to be created and consumed by financial analysts.

  • Standard API: This allows other engineering teams to call upon ML on demand to get a forecast, without needing to set up the infrastructure to train and serve models. This process works hand in hand with other forecast centralization efforts we’ve done.
  • Self-Serve UI: On the other end of the spectrum is to have tools that allow any financial analyst to produce their own forecast with a few clicks of their mouse. This allows us to scale ML to every single person in finance, without needing to rely on an engineering team to help produce a forecast. The tool we built for this is called Finn, and you can learn more about it here.

Back to Table of Contents

How to get started with ML

Getting the ball rolling can seem like an insurmountable task. Thankfully it’s easier than ever to get started with ML. Check out how we got started back in 2015.

Another way to make quick wins is to use something pre-built off the shelf that’s ready to go. Thankfully you can use it for free! We have open-sourced our ML forecasting code into a standard package called finnts. You can have your engineers take this code and quickly get up and running with ML forecasts within a week. It condenses almost 10 years of trial and error in Microsoft finance down to code that works at scale in production. Try it!

Back to Table of Contents

Going from ML as a triangulation point to replacing manual human forecasts

There’s a saying that 80% of ML projects never make it into production. This definitely applies to ML forecasting, but in a different way. Better put, 80% of ML forecast projects never make it past the triangulation point phase.

There are three distinct phases of adopting a ML forecast.

  1. Initial Development: Going from initial idea to a first prototype. Trying to see what a ML forecast would look like and if accuracy is good or not.
  2. Triangulation Point: Using a ML forecast every month or quarter alongside your existing manual forecast process. ML is another data point you “triangulate” with the actual forecast you will be using. So ML is nice to have but not required.
  3. Baseline Forecast: Using a ML forecast to replace your manual process. Now the first step in your forecast cycle is to produce the ML forecast and build on top of it, using the ML forecast as the initial “baseline” foundation. You have now “burned the ships” of your manual process, never to look back. You are in it to win it with ML.

It’s easy to go from phase 1 to phase 2. With tools like finnts you can get a new forecast up and running in less than a day. But most ML forecasts go to the triangulation point stage and die. It’s kind of a bummer but a fact of life. Here are some tips to break through the resistance and get all the way to phase 3.

  • Have a timeline. Finance teams are busy. There is always more to do in the time allotted to do it. So having deadlines helps move work along. Instead of a “one day we will switch to ML” attitude you need a deadline like “we will switch to ML within the next two quarters”. This instills a sense of urgency and gets things moving. I’ve seen ML projects stall for 2+ years because there wasn’t a timeline set in place. An answer like “we’ll move to ML sometime this fiscal year” is not good enough. Keep pushing until a deadline is agreed upon, and do all you can to stick to it.
  • Get leadership buy in. The quickest way to change a financial analyst’s behavior is to have them hear it from their boss. Getting senior leadership buy in helps with the ongoing issue of prioritization. If the leaders make it a priority, then every person on their team has no choice but to make it a priority. Things start to move a lot faster once that happens.
  • If accuracy is a concern, establish a benchmark to beat. If you ask any forecast owner what kind of forecast error they want, they will probably say something like “less than 1% error”, meaning 99% accuracy. This is all well and good, but if the previous manual forecast process had a forecast error of 10%, then having ML come in at 9% or less is already a win. Make sure you compare the approaches and agree beforehand to switch once a certain benchmark is beaten.
  • Squeaky wheel gets the grease. Constant contact with the end ML forecast user helps speed things up. I can’t tell you how many times I’ve seen finance teams come to me, over the moon excited about using ML to revolutionize their team, only to have that same team’s ML usage fizzle out over the next 2 months. ML projects take time, and momentum with business partners can be lost quickly. Something I’ve realized is that simply staying in contact with them, even as little as 1-2 times a month, helps move people from ML curious to ML power users rather quickly. It may seem like you’re bugging them too much, but keeping ML at the top of their mind helps with prioritization.

Back to Table of Contents

Building data science talent in finance

Hiring technical talent into the finance org is no easy feat. Here are a few challenges you have to overcome.

  • Lack of defined career path. A data scientist working in finance will most likely not work for a data scientist manager. If they are lucky they might work for a software engineering manager, but I haven’t seen that happen too many times in Microsoft finance. This makes it hard to attract strong data science talent and get them to stick around for many years. If they keep getting promoted, there are not many other jobs they can rise up to and take. The more successful they get, the more likely they will have to move outside of finance to grow their career. And that’s ok. So make sure you cultivate the career path of your data scientists, even if that means they eventually leave your team.
  • Lack of mentors. Because there aren’t a lot of data scientists in finance, there is often no one they can go to for help. For example, I’ve never had a boss review my code. Not even once. So to know if what I built is any good I needed to go consult the opinions of others. Sometimes these can be other data scientists in the finance org, but often I’ve had to go outside of finance to other mature data science teams to get advice and feedback on my work. So for your data scientists, make sure they are connected to a larger data and AI community at your company. And if there isn’t one, ensure they are finding community outside of the company at conferences and online spaces.
  • Lack of data engineering. Garbage in, garbage out. If there isn’t good data, no amount of data scientists can create value for you. Worse still, don’t rely on a data scientist to fix your data problems before they can start training models. Building robust data infrastructure in finance should be the job of data engineers, not data scientists. Those two jobs often cannot be combined into one, because they require different skill sets and even different mindsets. So make sure you get your data house in order before even thinking of hiring data scientists.
  • Lack of complexity. Sometimes data scientists can get lost in building the coolest thing possible, instead of building something that helps the business. For example, a data scientist might use the latest hot thing from the world of deep learning to create a solution for a business problem, when in reality a simple linear regression could have taken 1/10th the time and given you the same results. Also, when explaining their work to the end user in finance, they might dive right into the technical aspects of the project and all of the cool statistics they performed to save the day and create something awesome. This kind of explanation works well with other data scientists, but it will scare off every finance person I know. One of the main jobs of a data scientist in finance is to abstract away all of the complexity of the craft and make their work simple to understand for the average finance person. It may feel like the data scientist is “dumbing down” all of their hard work, but this part is crucial. If the end finance user cannot understand what they are being given by the data scientist, then they will not trust it, which means they will not use it. So gone are the fun technical presentations showing off all of the bells and whistles of their work. Now all they need to do is explain in simple terms whether what they built worked, maybe how it works under the hood (simplest explanation possible), and how the finance end user can leverage the work to do their job better.

Ok, now that we know the challenges we must overcome to grow strong data science talent, here are some more tips for making data scientists successful in finance.

  • Borrow, rent, then buy. If you are starting from zero, I recommend trying to borrow a data scientist from elsewhere in your company. Once you’ve gotten a few projects off the ground, rent some outside vendors/contractors to keep the work going. Then once you’re really kicking butt, either hire those vendors as full time employees or go out and hire other data scientists into full time positions.
  • Start small. If your team has the budget to hire 5 data scientists, maybe just hire two for now and see how it goes. Then ramp up over time. When new technology is booming, like the current generative AI wave, people have the tendency to overinvest. So start small and grow over time. Having too much work for your 1-2 data scientists to do is way better than having to lay off 1 of your 5 data scientists because of budget cuts that always seem to happen every few years.
  • Connect with mentors outside of finance. Make sure your data scientists have a community of like minded people whom they can share their work with and get feedback. This will most likely be with data scientists in other departments like marketing, sales, and product teams.
  • Always answer the “so what” question. Once you hear this concept you cannot unhear it. A boss of mine recently started asking me one simple question after I told him about a project I had worked on: “so what?” I recently moved our model training infrastructure to a new Azure resource, so what? I’m working on improving the model ensembling techniques used in our ML forecasting models, so what? I’ve been asked to present to some outside customers about the work we do in Microsoft finance, so what? It’s a simple question but also a powerful one. If you cannot answer the so what question, you probably should go work on something else. Often the answer to the so what question boils down to a metric or number that measures the impact of the work. If you cannot convey that to your manager, then you might be in danger of making yourself busy instead of productive. Always be able to answer the so what question with data or customer testimonials.
  • Hire for more applied ML, not research ML. It’s better to hire someone who has spent their time putting ML solutions into production than someone who just got their PhD in deep learning. There is a difference between applied ML and research ML. If you do hire a PhD to work on ML stuff, they most likely will not be happy; this goes back to the lack of complexity I called out earlier. We’re not building rockets in finance, and often we don’t even need deep learning to get the job done. So this kind of work may not entice the PhD to stick around long. Instead hire someone who can pair data science with business expertise, which brings me to my next point.
  • Convert financial analysts into data scientists. The perfect scenario is to take existing financial analysts, who are already experts in the business, and either give them tools that turn them into data scientists without the code, or actually have them take the red pill and go all the way down the ML rabbit hole, becoming true data scientists. This fixes most problems. Your data scientist already has strong domain expertise in the business, they can already speak like a normal person to business partners without all of the technical jargon, and they’ve only ever known applied ML so they won’t waste time trying to build custom models from scratch. In my opinion it’s a no brainer and something that should become the standard going forward.

Back to Table of Contents