Now, in the second half of 2023, we’re finding that the initial intense spike in investor enthusiasm around generative AI is beginning to stabilize. That’s good news: after years of activity in this vertical, we know it takes a critical lens to evaluate generative AI applications that go beyond surface-level GPT excitement.
One thing we have consistently looked for is the superpower startups can unlock by combining their own proprietary rule-based engines with LLMs (like GPT). Below, we drill down into two case studies of this application of generative AI from the Hetz portfolio: Anima (Fund I) and Tabnine (Fund I).
The Power of Combining Rule-Based Engines with LLMs
Rule-based engines are integral parts of AI applications. They use predefined heuristics and algorithms to make decisions, providing a stable, predictable basis for software functions. Layering Large Language Models (LLMs) such as GPT or proprietary models over these rule-based systems creates a hybrid approach, leveraging the strengths of both systems. LLMs contribute creativity and adaptability, while rule-based engines provide stability and consistency.
This ‘two-punch process’ involves first generating output with non-generative, rule-based code and then adding a layer of generative AI on top. The technique capitalizes on the inherent strengths of both models.
When rule-based engines and LLMs are combined, the success rate increases significantly: often around 50% right out of the box, and more than 90% once an LLM-supported system for error detection, fidelity testing, and self-healing is added. This is a testament to the power of the hybrid approach.
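To make the ‘two-punch process’ concrete, here is a minimal sketch in Python. All function names are illustrative placeholders, not any vendor’s real API: a deterministic rule-based pass produces a stable baseline, a (stubbed) generative layer refines it, and a supporting loop detects errors and retries, falling back to the trusted baseline if healing fails.

```python
# Hypothetical sketch of the hybrid "two-punch" pipeline: rule-based
# baseline, generative refinement, then an error-detection/healing loop.
# All functions here are illustrative stubs, not a real product's API.

def rule_based_generate(spec: dict) -> str:
    """Punch 1: deterministic output from predefined heuristics."""
    return f"<button>{spec['label']}</button>"

def llm_refine(code: str, spec: dict) -> str:
    """Punch 2: stand-in for the generative layer (a real system
    would prompt an LLM here)."""
    return code.replace("<button>", f'<button class="{spec["style"]}">')

def detect_errors(code: str) -> list:
    """Cheap rule-based check standing in for error detection."""
    return [] if code.count("<") == code.count(">") else ["unbalanced tags"]

def generate(spec: dict, max_heal_attempts: int = 3) -> str:
    base = rule_based_generate(spec)     # stable, predictable baseline
    code = llm_refine(base, spec)        # creative enhancement on top
    for _ in range(max_heal_attempts):   # healing loop on detected errors
        if not detect_errors(code):
            return code
        code = llm_refine(base, spec)    # regenerate from the trusted base
    return base                          # fall back to rule-based output

print(generate({"label": "Save", "style": "primary"}))
```

The key design point is that the generative layer can only improve on (never replace) the rule-based baseline, so a failed refinement degrades gracefully to predictable output.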
Case Study: Anima converts design to code
Anima leads the design-to-code space, with over 500k installs on Figma, 300k installs on Adobe XD, and a customer base that includes teams from Amazon, Cisco, Samsung, Deloitte, and more. Anima recently co-launched Figma’s new Developer Mode, where developers can instantly turn Figma designs into React and get code right inside Figma. Clients such as Radiant, an IT consultancy, report saving 50% of the time needed for a POC/MVP using Anima.
Anima provides a platform for developers to convert design into code seamlessly. Its core is a rule-based code-generation engine built on predefined algorithms and heuristics, offering a stable, efficient, and predictable code-generation process. The company’s game-changing innovation is the integration of LLMs on top of that non-generative core. This combination enables the system to enhance and iterate on the base code, leading to more efficient and creative code output, well suited to customer needs.
The iteration process in Anima involves running the enhanced code, testing its performance and the fidelity of the interface, and using the feedback loop to refine the code in subsequent runs. This continuous improvement process is integral to achieving a success rate of more than 90% - meaning that over 90% of the time, developers can save half their coding time with pixel-perfect, error-free, trusted code that matches their teams’ coding conventions. This is a huge time saver for R&D teams.
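The run-test-refine loop described above can be sketched as follows. This is an illustrative toy, not Anima’s actual implementation: the fidelity metric and refinement step are stand-ins (a real system would render the code and compare it to the design, and would prompt an LLM with the fidelity feedback).

```python
# Illustrative run-test-refine loop: measure fidelity against the design,
# feed the result back into a refinement step, stop at a target threshold.
# The metric and refiner are toy stand-ins, not a real product's logic.

def measure_fidelity(code: str, design: str) -> float:
    """Toy fidelity metric: fraction of design tokens present in the code."""
    tokens = design.split()
    return sum(1 for t in tokens if t in code) / len(tokens)

def refine(code: str, design: str) -> str:
    """Stub refinement: a real system would prompt an LLM with the
    fidelity feedback; here we just add one missing design token."""
    for t in design.split():
        if t not in code:
            return code + " " + t
    return code

def iterate_until_faithful(code: str, design: str,
                           threshold: float = 0.9, max_runs: int = 10) -> str:
    for _ in range(max_runs):
        if measure_fidelity(code, design) >= threshold:
            break  # fidelity target reached; stop iterating
        code = refine(code, design)
    return code

design = "header logo nav footer"
result = iterate_until_faithful("header", design)
```

The loop structure, not the toy metric, is the point: each run produces a measurable fidelity signal that drives the next refinement, which is what lets a hybrid system converge toward a high success rate instead of depending on a single generative shot.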
Case Study: Tabnine assists developers with code completion
Tabnine is a powerful AI-driven code completion tool that assists developers in writing code faster and more efficiently - and it’s doing so with over one million monthly active users. Teams at Samsung, LG, AstraZeneca, and Accenture use Tabnine to boost development productivity with quicker, more efficient code. The company has partnered with Google Cloud to further advance generative AI capabilities for the Google Cloud partner ecosystem.
Like Anima, Tabnine’s AI uses a combination of rule-based heuristics and algorithms, multiple ML models, and LLMs. The rule-based engine provides a solid foundation, while the AI layer aids in generating code suggestions, taking into account the context of the existing code and a vast repository of publicly available code matching enterprise-grade standards.
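The division of labor described above can be sketched in a few lines. This is a hedged illustration of the general idea, not Tabnine’s real architecture: a rule-based layer proposes only candidates that syntactically extend what the developer typed, and a model-like scorer ranks them by how well they match the surrounding code context.

```python
# Hedged sketch of hybrid code completion (illustrative only): rule-based
# candidate filtering, then context-aware ranking standing in for the
# ML/LLM layer. Names and logic here are invented for illustration.

def rule_based_candidates(prefix, symbols):
    """Rule-based pass: only symbols that syntactically extend the prefix."""
    return [s for s in symbols if s.startswith(prefix)]

def context_score(candidate, context):
    """Stand-in for an ML/LLM ranker: score by occurrences in nearby code."""
    return context.count(candidate)

def complete(prefix, symbols, context):
    """Return the best completion, or None if no candidate qualifies."""
    candidates = rule_based_candidates(prefix, symbols)
    if not candidates:
        return None
    return max(candidates, key=lambda c: context_score(c, context))

symbols = ["parse_config", "parse_args", "print_report"]
context = "args = parse_args(); run(parse_args)"
print(complete("pa", symbols, context))
```

Note how the rule-based filter guarantees that every suggestion is at least syntactically plausible, so the learned layer only has to solve the easier problem of ranking valid candidates by context.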
Tabnine's success rate and productivity enhancements are impressive. Developers using the tool can significantly speed up their coding process while minimizing errors - with over 30% of code automated per developer on average, and teams gaining 15% or more in code-shipping productivity overall. This efficiency is in part due to the powerful combination of rule-based AI with a proprietary generative LLM layer.
What we look for at Hetz
None of this is new to us at Hetz: we built and began acting on our investment thesis around AI long before the ‘generative AI gold rush’ of 2023. Since backing Anima and Tabnine ~five years ago, we’ve invested in many other companies innovating with generative AI, as well as companies building LLM infrastructure and LLM security - newer, critical requirements in this rapidly maturing AI space.
If you’re developing a company using generative AI in a substantial way, we’d like to meet you.