“A lot of people in universities aren’t very good at software engineering,” says Kenny Daniel, co-founder and chief technology officer of cloud computing startup Algorithmia. “I’ve always had more software engineering skills.”
That, in a nutshell, is what makes the six-year-old, Seattle-based Algorithmia stand out in a world overrun with machine learning offerings.
Amazon, Microsoft, Google, IBM, Salesforce, and other large companies have been offering cut-and-paste machine learning in their cloud services for some time. Why would anyone turn to a small, young company instead?
No reason, unless that startup has a special knack for the practical side of machine learning.
That is the premise of the firm Daniel founded with Diego Oppenheimer, a Carnegie Mellon graduate and Microsoft veteran. The two became close friends as undergraduates at CMU; when Oppenheimer went into industry, Daniel pursued a doctorate in machine learning at USC. While researching ML, Daniel realized he wanted to build things more than he wanted to theorize.
“I had the idea for Algorithmia in college,” recalls Daniel in an interview with ZDNet. “I saw the struggle to get things working in the real world; my colleagues and I were building cutting-edge machine learning models, but not really getting them adopted in the real world the way we wanted.”
He left USC and partnered with Oppenheimer to found the company. Oppenheimer had seen from the industry side that even large companies like Microsoft struggled to find enough talent to deploy models and put them into production.
The duo initially set out to create an app store for machine learning, a marketplace in which people could buy and sell ML models or programs. They secured seed funding from venture capital firm Madrona Ventures and made Pike Place in Seattle their home. “There is an awful lot of ML talent here, and the rents aren’t as crazy” as in Silicon Valley, he explained.
Their intention was to match machine learning consumers, the companies that wanted the models, with the developers. But Daniel noticed that something wasn’t working. Most customers using the service were relying on machine learning models from their own teams. There was little transaction volume because companies were still just trying to get things to work.
“We said, okay, there’s something else going on here: People don’t have a great way to turn their models into scalable, production-ready, highly available, and resilient APIs,” he recalls.
“A lot of these companies would have data scientists building models in Jupyter on their laptops, and wouldn’t really have a good way to hook them up to a million iOS apps that are trying to recognize images, or a data pipeline back-end that tries to process terabytes of data per day.”
There was, in other words, “a gap in software engineering.” As a result, the company shifted from a focus on a marketplace to a focus on providing infrastructure for scaling customers’ machine learning models.
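To make the gap Daniel describes concrete: the missing step is often just wrapping a trained model in a small network service. The sketch below is purely illustrative (the scoring function and feature names are invented stand-ins, not anything from Algorithmia), using only Python’s standard library to expose a toy “model” as a JSON-over-HTTP endpoint.

```python
import json
from wsgiref.simple_server import make_server

# Hypothetical stand-in for a trained model: scores fraud risk
# from two made-up features. A real system would load a serialized
# model artifact here instead.
def predict(features):
    score = 0.7 * features["amount_zscore"] + 0.3 * features["geo_mismatch"]
    return {"fraud_score": round(score, 3)}

def app(environ, start_response):
    # Read the JSON request body, run the model, return JSON.
    size = int(environ.get("CONTENT_LENGTH") or 0)
    features = json.loads(environ["wsgi.input"].read(size))
    body = json.dumps(predict(features)).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

# To serve predictions locally (this call blocks):
#   make_server("", 8000, app).serve_forever()
```

Even this toy version hints at what the quote is getting at: scaling, availability, GPU scheduling, monitoring, and versioning all still have to be layered on top, and that operational work is where many data science teams get stuck.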
The company had to solve many fundamental challenges of multi-tenant computing long before those techniques became common on the large cloud platforms.
“We were running functions before AWS Lambda,” says Daniel, referring to Amazon’s serverless offering.
Problems like: “How do you deal with GPUs? Because GPUs weren’t designed for this stuff; they were designed to make games run fast, not for multi-tenant users to run workloads on them.”
Daniel and Oppenheimer began meeting with large financial and insurance companies to discuss how to resolve their deployment issues. Training a machine learning model might work fine on AWS. But when it came time to make predictions with the trained model, putting it into production to serve a high volume of requests, companies ran into problems.
Companies wanted their own instances of their machine learning models in virtual private clouds, on AWS or Azure, with the ability to have dedicated customer support, metrics, management, and monitoring.
This led to the creation of the Algorithmia Enterprise service in 2016. It was made possible by new capital, a $10.5 million injection from Gradient Ventures, Google’s AI investment arm, followed by a $25 million round last summer. In total, Algorithmia has received $37.9 million in funding.
Today, the company has seven-figure deals with large institutions, most for running private deployments. You can get something like what Algorithmia offers with Amazon’s SageMaker, for example. But SageMaker is built around Amazon’s resources only. The appeal of Algorithmia is that its deployments can run across multiple cloud installations, wherever a customer needs machine learning to live.
“A number of these institutions need to have parity wherever their data resides,” Daniel said. “You may have data on premises, or maybe you’ve done some acquisitions, and things are happening in multiple clouds; being able to have parity between these is one of the reasons people choose Algorithmia.”
Amazon and the other cloud giants each market their offerings as end-to-end services, Daniel said. But that flies in the face of the reality that ML in practice depends on a soup of many technologies that have to come together.
“In the history of software, there has not been a clear end-to-end winner,” observed Daniel. “That’s why GitHub, GitLab, Bitbucket, and all that continue to exist, and there are different CI [continuous integration] tools like Jenkins, as well as different deployment systems and different container systems.”
“It takes a fair amount of expertise to tie all of these things together.”
There is some independent support for what Daniel claims. Gartner analyst Arun Chandrasekaran puts Algorithmia in a category he calls “ModelOps.” The “life cycle” of artificial intelligence applications, Chandrasekaran told ZDNet, is different from that of traditional applications “due to the complexity and dynamism of the environment.”
“Most organizations underestimate the time it will take to bring AI and ML projects to production.”
Chandrasekaran predicts that the ModelOps market will grow as more companies try to deploy AI and run into practical hurdles.
While there is a risk that cloud operators will integrate some of what Algorithmia offers, Chandrasekaran said, the need to deploy outside of a single cloud supports the role of independent ModelOps vendors such as Algorithmia.
“AI deployments tend to be hybrid, both in terms of covering multiple environments (on-premises, cloud) as well as the different AI techniques that customers can use,” he told ZDNet.
Besides the cloud providers, Algorithmia’s competitors include DataRobot, H2O.ai, RapidMiner, Hydrosphere, ModelOp, and Seldon.
Some companies will go 100% AWS, Daniel conceded. And some customers may be happy with the generic capabilities of the cloud providers. Amazon, for example, has made a lot of progress with text translation as a service, he noted.
But industry-specific or vertical-market machine learning is another story. One Algorithmia client, a large financial company, needed to deploy a fraud detection application. “It sounds crazy, but we had to figure all this out: How do we know that this data here is being used to train this model? That’s important because it’s a liability issue for them [the client].”
The immediate priority for Algorithmia is a new version of the product, called Teams, that lets companies host an invitation-only workspace for people collaborating on a particular model. It can span multiple “federated” instances of a model, Daniel said. Pricing is based on compute usage, so it’s a pay-as-you-go option, compared with the annual billing of the Enterprise version.
For Daniel, the divide he observed in academia between pure research and software engineering is what has repeatedly brought AI down in the past. The so-called “AI winter” periods over the decades were largely the result of practical obstacles, he believes.
“These were times when there was hype for AI and ML, and companies put in a lot of money,” he said. “If companies aren’t rewarded, if there is a lack of progress, we might be looking at another bust of the hype cycle.”
On the other hand, if more companies succeed in scaling their machine learning, it may lead to a flowering of the kind of marketplace he and Oppenheimer originally envisioned.
“It’s like the Unix philosophy, these little things that come together; that’s how I see it,” he said. “At the end of the day, it will enable all kinds of things, completely new scenarios, and that’s incredibly valuable, things that we can make available in a free market for machine learning.”