Mondrian AI (official site) is a company that describes itself as having an “End-to-End AI Platform for the entire AI development process,” including a consulting service to put a complete solution into service from a customer’s business request and needs.
Mondrian’s Yennefer Enterprise AI Platform (link) looks very complete: it handles hardware resource allocation, AI model deployments, and the data pipeline from storage through processing to the end-user dashboard.
Although AI is the primary focus these days, the company has built expertise in data science and data visualization since they tend to use a similar stack and are in high demand by various customers.
Yennefer is designed to handle the complete development workflow and can interface with standard tools such as GitHub and the large public clouds where code and data reside. Customers can then build their data processing with the model of their choice and define their system configurations in software. Finally, everything is deployed and managed in production.
Such a platform can greatly accelerate how fast a complete solution can be deployed. Most companies, even huge ones, don’t have the engineering resources and know-how for every business unit or key project to deploy a major AI project.
That’s where the consulting aspect of Mondrian AI can give it an edge with specific customers, especially those who want to deploy something complex relatively quickly. While more and more platforms claim to perform similar tasks, the business is typically expected to provide its own engineers or consultants. Just finding and selecting these can already be challenging.
Customers who want to run their AI workflows on-premises might be particularly interested in Yennefer, as it was built for that. Many startups like starting their operations on public clouds for convenience, but large, established companies are often stricter about where their data can reside.
Mondrian must also choose its projects carefully, as its engineering resources aren’t limitless. It has worked with prestigious companies such as Samsung, SK Telecom, Seoul University, and the Incheon Airport.
Interestingly, the company is also building its own ChatGPT-like system, optimized for the Korean language. The large language models (LLMs) we typically hear about are trained with English language content, and it’s not uncommon to hear developers complain about the lack of datasets to train LLMs in other languages.
Now, AI-based applications use the retrieval-augmented generation (RAG) architecture to build chatbots that can answer specific or proprietary questions. However, RAG relies on a high-performing LLM. Mondrian AI thinks its upcoming LLM can do the job, which could be a powerful argument for Korean companies to adopt the platform.
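To make the RAG idea concrete, here is a minimal sketch of the pattern: retrieve the documents most relevant to a question, then hand them to an LLM as context. The document list, the keyword-overlap scoring, and the `generate` placeholder are all illustrative assumptions, not part of Mondrian's actual system; a production RAG pipeline would use vector embeddings for retrieval and a real LLM call for generation.

```python
import re
from collections import Counter

# Toy knowledge base -- a real system would index proprietary documents.
DOCUMENTS = [
    "Yennefer handles hardware resource allocation and model deployment.",
    "The platform connects to GitHub and the major public clouds.",
    "Mondrian AI is building an LLM optimized for the Korean language.",
]

def score(query: str, doc: str) -> int:
    """Keyword-overlap score -- a stand-in for embedding similarity."""
    tokens = lambda s: Counter(re.findall(r"\w+", s.lower()))
    return sum((tokens(query) & tokens(doc)).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def generate(query: str, context: list[str]) -> str:
    """Placeholder for the LLM call: assemble the augmented prompt.
    A real system would send this prompt to the model."""
    return f"Context: {' '.join(context)}\nQuestion: {query}"

prompt = generate("Which language is the LLM optimized for?",
                  retrieve("LLM Korean language"))
```

The key point RAG illustrates is why the LLM's quality matters: retrieval only assembles the context, and the model still has to reason over it in the user's language, which is where a Korean-optimized LLM would earn its keep.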
Building a platform with such depth is impressive for a young company. There are a lot of moving parts that require cross-disciplinary expertise as well as production knowledge.