“Generative AI is just a phase. What’s next is interactive AI,” said Mustafa Suleyman, co-founder of Google’s DeepMind.

Rich Miner, the co-founder of Android, described the future of UX at the AGI House event as follows:

  • “We will interact with computers in natural language, beyond just pushing buttons and writing code.
  • Everyone will be able to collaborate on building apps, and apps will be personalized for each user.
  • The user experience will be flattened, i.e. users’ natural language commands (even the complex ones) will be decomposed automatically across different UX components.”

At Lexie.ai, we have developed a programming framework for building AI-native applications. Our customers include e-commerce, fintech, and prop-tech companies looking to improve their user experience and stay ahead of the competition using AI. They typically move through several stages of maturity: first introducing a standard chatbot separate from the main application, then adding AI co-pilots to their software, and finally adopting an AI-native user experience. In this blog, we elaborate on these alternatives and describe their pros and cons.

Chatbots: The Adaptive Conversationalists


Chatbots have become ubiquitous in the digital landscape. Powered by Large Language Models (LLMs), these AI-driven conversation agents aim to provide users with quick answers and assistance in a conversational manner. This evolution has led to a more personalized and enjoyable user experience, thanks to their ability to comprehend natural language and adapt to a broad range of queries.

Chatbots excel at handling routine tasks and relatively simple queries, making them valuable for customer support and information retrieval. Their adaptability makes them a versatile tool that can be tailored to specific domains and applications.

Co-Pilots: The Guiding Hands

Co-pilots take a hands-on approach to assist users in navigating more complex applications. They work in tandem with the main application UI, offering guidance and assistance within some of the functionalities in the app. Think of them as a digital assistant sitting shotgun, helping you navigate the road of specific functionalities within an application’s interface.


Co-pilots are a valuable addition to user experience. They can perform tasks like filling out forms, guiding users through complex workflows, and even helping with troubleshooting within selected functionalities of an application. However, their expertise is often confined to specific modules, limiting their versatility.

AI-Native Apps: The Future of Seamless Interaction

Now, let’s talk about AI-native apps, the game-changers in the world of user experience. What sets AI-native apps apart is that they are not merely a feature or an afterthought. When an AI-native app is built, or when an existing app is transformed into an AI-native one, it revolutionizes the user experience. Here’s how:

  1. Comprehensive AI-copilot for Core Functionalities: AI-native apps inherently provide co-pilot functionality for all features within the app. Unlike co-pilots, which are typically limited to assisting within specific functionalities, AI-native apps seamlessly integrate AI across the entire app ecosystem. This means you have an AI-powered co-pilot at your disposal for every aspect of the app.
  2. Novel Functionalities: AI-native apps take things a step further by automatically generating novel functionalities. These functionalities are based on the application’s specification, knowledge base, and a set of guardrails provided by the app’s author. The “guardrails” specify which parts of the knowledge base can be used to create novel functionalities. This ensures that AI-native apps enhance rather than disrupt the user experience while adhering to predefined guidelines set by the app’s author.
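To make the guardrail idea concrete, here is a minimal sketch, with hypothetical names, of how an app author might allow-list which knowledge-base topics may seed novel functionality (this is purely illustrative, not Lexie’s actual API):

```python
# Hypothetical sketch of "guardrails": the app author tags knowledge-base
# entries, and only entries whose topics are allow-listed may be used to
# generate novel functionality. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class KBEntry:
    topic: str
    content: str

# Guardrails provided by the app's author: topics that may drive new features.
GUARDRAILS = {"returns_policy", "product_catalog"}  # "billing" deliberately excluded

def eligible_entries(kb: list[KBEntry]) -> list[KBEntry]:
    """Return only the knowledge-base entries the AI may build new features from."""
    return [e for e in kb if e.topic in GUARDRAILS]

kb = [
    KBEntry("returns_policy", "Items may be returned within 30 days."),
    KBEntry("billing", "Internal billing rules."),
    KBEntry("product_catalog", "All SKUs with descriptions."),
]

allowed = eligible_entries(kb)
print([e.topic for e in allowed])  # ['returns_policy', 'product_catalog']
```

The point of the sketch is that novelty generation stays inside an author-defined boundary, rather than drawing freely on everything the model can see.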


In essence, AI-native apps are a holistic approach to transforming the way we interact with technology. They don’t just assist users; they enhance every facet of the app, providing guidance and automating tasks seamlessly across the entire application using a simple natural language interface. Plus, their ability to generate novel functionality based on the app’s knowledge base, within the specified guardrails, adds a layer of innovation and efficiency that chatbots and co-pilots simply cannot match.

The Superiority of AI-Native Apps

So, why are AI-native apps superior to chatbots and co-pilots? Here are a few key reasons:

  1. Comprehensive Assistance: AI-native apps provide co-pilot functionality for all features within the app, offering users assistance across the board using a simple natural language interface.
  2. Automated Innovation: AI-native apps generate novel functionalities based on app specifications, enhancing user productivity and problem-solving capabilities while adhering to the app author’s guidelines.


In conclusion, while chatbots, including advanced LLM-based chatbots, and co-pilots have their merits in assisting users within specific functionalities of an application, AI-native apps represent the next frontier in user interaction. They are more versatile, seamless, and efficient, offering a superior user experience that adapts to your needs across the entire app ecosystem. We think with AI-native apps, we are going to witness a revolution in the way we interact with technology, making our digital lives simpler, smarter, and more enjoyable.

Related Blogs:

AI Reasoning: Balancing Generality and Reliability

Friends of Lexie Blog Series – July Edition

In the ever-evolving landscape of artificial intelligence, the potential for transformative impact is monumental. According to Goldman Sachs, AI has the potential to exert a staggering $7 trillion influence on the global GDP. However, as AI rapidly advances, a new challenge emerges: the delicate balance between generality and reliability in AI reasoning.

The generality of LLM systems is quite impressive: they are good at learning from data and applying what they have learned to new data. For example, you can use the same AI algorithm to analyze healthcare data, e-commerce data, or insurance data. But current LLM systems still have significant limitations, especially when it comes to complex tasks. The main reason for these limitations is that they cannot reason, a capability also known as System 2 intelligence.

An ideal reasoning module can break down complex tasks into simpler ones, retrieve relevant information when it does not have enough, or use other non-AI tools to fulfill user queries if needed. To address this problem, we often embed LLM systems inside traditional software structures. One example of this is the LangChain tool, along with a few other projects. This is a bit like putting a coarse-grained reasoning system into standard software, but it limits the generality of AI systems.
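To illustrate what coarse-grained reasoning inside standard software looks like, here is a minimal, self-contained sketch in which the control flow (route, retrieve, answer) is fixed code and a stubbed LLM is only called at the leaves. All names are hypothetical:

```python
# Sketch of "LLM inside a traditional software structure": the routing logic
# is hard-coded, so the reasoning is only as general as the code allows.
# The LLM is stubbed so the example runs on its own.

def llm(prompt: str) -> str:
    # Stand-in for a real model call.
    if "classify" in prompt:
        return "lookup" if "order" in prompt else "chat"
    return f"answer based on: {prompt}"

def retrieve(query: str) -> str:
    return "order #123 shipped yesterday"  # stand-in for a database lookup

def handle(query: str) -> str:
    # Coarse-grained reasoning: one fixed routing decision, then one action.
    route = llm(f"classify: {query}")
    if route == "lookup":
        context = retrieve(query)
        return llm(f"{query}\ncontext: {context}")
    return llm(query)

print(handle("where is my order?"))
```

Any query that does not fit the two hard-coded routes is handled poorly, which is exactly the generality limitation described above.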

On the other hand, there are agentic solutions. These approaches work probabilistically, meaning they do not follow strict rules, and they are quite generic since they use LLMs themselves for reasoning. Projects like WebGPT, AutoGPT, and ReAct fall into this category. However, they are not reliable enough for production applications, especially enterprise-grade ones.

Sure, these two paradigms are influential, but they are not the only ways to explore AI reasoning. The truth is, the lack of a versatile and dependable reasoning engine is a major bottleneck that is holding back countless AI initiatives. It is keeping them stuck in the demo phase, and it is preventing them from delivering a seamless user experience in production systems. 

Introducing Lexie: Enhancing AI Reasoning with Thought Graphs

Meet Lexie, a new AI system that uses a reasoning representation called the Thought Graph (“TG”). The TG and its framework neatly separate the complexities of AI reasoning into four distinct parts:

  1. Generating a reasoning space
  2. Searching the reasoning space for the best and most reliable option
  3. Executing the reasoning
  4. Refining the reasoning with human feedback

This separation gives us the flexibility to build a range of reasoning abilities (not just two extremes), striking a careful balance between reliability and generality. In our opinion, a reasoning representation like TG is as essential to AI applications as HTML is to web development.
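The four-part separation can be pictured as four swappable functions around one loop. This is an illustrative skeleton with made-up candidate plans and scores, not Lexie’s implementation:

```python
# Toy skeleton of the four-step separation: because each stage is a plain
# function, different reasoning strategies can be swapped in without
# changing the surrounding loop.

def generate(query):
    # 1. Generate a reasoning space: candidate plans with reliability scores.
    return [("direct_answer", 0.4), ("decompose_then_answer", 0.9)]

def search(space):
    # 2. Search the space for the most reliable candidate.
    return max(space, key=lambda plan: plan[1])

def execute(plan):
    # 3. Execute the chosen plan.
    name, _score = plan
    return f"executed {name}"

def refine(plan, feedback, space):
    # 4. Refine: human feedback adjusts the chosen candidate's score.
    name, score = plan
    delta = 0.1 if feedback == "good" else -0.1
    return [(n, score + delta if n == name else s) for (n, s) in space]

plan = search(generate("update my shipping address"))
print(execute(plan))  # executed decompose_then_answer
```

Swapping out `generate` or `search` moves the system along the generality/reliability spectrum while steps 3 and 4 stay untouched, which is the flexibility the separation is meant to buy.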

An Open Representation for Reasoning

TG has been a huge win for us, and it’s now an essential part of our system. It gives us the flexibility to use a variety of reasoning methods, and we can implement essential safety mechanisms no matter which method we choose.

We have been in conversation with a number of industry experts who are working on some very innovative AI projects. Their collective insights have led us to a clear conclusion: open representation of reasoning is the way to go. That is why we are building a community of thought leaders to enrich our Thought Graph and make it open source in the future.

Upcoming Blog Series and Call to Action

We are excited to share a series of blog posts in the coming months that will dive into the insights we have gathered from over two and a half years of building and refining the Thought Graph and deploying it across diverse applications. Whether you are just starting out on your AI journey or you are a seasoned pro, we think you will find this series informative and thought-provoking. Stay tuned!

If you believe in our vision of creating an open representation for reasoning, we would love to connect with you. Please subscribe to our blog series and become a part of the vibrant discourse that is shaping the future of AI reasoning. Together, we can create a future where generality and reliability seamlessly coexist in AI reasoning.

Thanks for your interest and support! We’re dedicated to making every app AI-native in a reliable way. Current AI models and LLMs are not reliable enough for enterprise applications, and their behavior is hard to analyze or explain. We’re working hard to change that.

We’re taking a graph-based approach to grounding our AI model. This representation lets different parties (programmers, product designers, human auditors, software modules, and AI models) analyze, understand, and improve the reasoning. We believe this representation, which we call Thought Graph, will be the foundation of AI-native applications, just like HTML is the foundation of web applications.

As we expand our “Friends of Lexie” list, we would like to welcome you to receive updates about our exciting venture. Please feel free to review our introduction to Lexie in this blog. If you have any questions or feedback, please contact us. We hope you enjoy this article.

Market Demand for AI-Native Apps

Replit’s data indicates that the number of AI projects on their platform has increased 34-fold since last year. A Databricks survey of 9,000 organizations also shows that almost every CEO is asking their business units to develop an AI strategy. Our observations from the market are similar. Our customers want to use AI in their customer support and content creation. They want to deploy AI-native solutions quickly and without allocating too many of their resources.

Lexie Updates

Product Highlights

We learned that our customers are interested in using our internal development tool, Thought Graph Low-code, to customize their applications. We have decided to add this tool to our offering as well. Please feel free to watch the demo below.

Introduction to Lexie

Lexie is a groundbreaking startup poised to transform the way we interact with applications. With our cutting-edge low-code technology, we are making every app an AI-native app, increasing the productivity and efficiency of building apps by 20x.

Imagine a user experience where you can effortlessly communicate with any application using natural language, in addition to and in coordination with other input actions (such as tapping, typing, and clicking). Lexie drives the application’s UI on behalf of the user, enabling seamless interaction and automating complex tasks that require touch points across different parts of the application. It eliminates unnecessary interactions and also automatically creates novel interactions if needed. Our customers include large e-commerce providers as well as startups from InsureTech, PropTech, and FinTech sectors.

Elevating User Experience with a Multi-Modal Reasoning Engine

Lexie was founded in 2021 with the vision of building a platform that would allow developers to transform every application into an AI-native application, where users can interact with the application using natural language. The AI agent (initially referred to as an “overlay bot”) performs the necessary reasoning and drives the application automatically, skipping unnecessary user interactions and creating novel UIs as needed. As an early entrant into the market, we initially built applications using our development platform rather than presenting the platform itself as the product, and Lexie built multiple co-pilot/chatbot agents for its e-commerce customers.

The launch of ChatGPT resulted in a substantial increase in interest in our technology. Customers were impressed by the speed at which Lexie was able to develop software, even though building AI-native software is more challenging than developing traditional software.


In practice, we kept building our development platform, Thought Graph, and used it to build our customer use cases. Over the last five months, Lexie decided to change its strategy and expose Thought Graph to its customers directly.

Why is Reasoning Important for AI-native Applications?

Large language models (LLMs) are not capable of handling complex scenarios on their own. While their performance is quite impressive when used as System 1 intelligence, they are far from ready for enterprise applications when used as a reasoning engine for System 2 intelligence.


If you follow the advancement of AI research, you probably know that reasoning in a general sense is one of the most important problems that top minds are working hard to solve. Examples include Tree of Thoughts and Thought Cloning, which use existing AI models to improve reasoning, and Self-Supervised Learning and Generative Flow Networks, which use entirely new models to do so.

When it comes to generating and driving the user experience, LLMs have similar shortcomings, and we therefore need a robust reasoning engine. However, there are three main considerations:

  1. Since we are solving the problem for a particular domain, we can leverage a fine-tuned version of existing AI models grounded using domain knowledge.
  2. Our main goal is to build a model for a software application to predict the right action(s) given a user command. Software apps have a good amount of documentation and source code that can help to build such a model.
  3. Collecting interaction data and user feedback is as important as, if not more important than, the AI model itself. Such data is key to improving and evolving the reasoning model.
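As a concrete illustration of the third point, here is a minimal sketch, with hypothetical names, of logging each command, predicted action, and user verdict so that accepted interactions can later serve as training examples:

```python
# Sketch of interaction-data collection for evolvable reasoning: each
# (command, predicted action, user feedback) triple is logged, and only
# accepted interactions become positive examples for later fine-tuning.

import json

LOG = []

def record_interaction(command: str, action: str, accepted: bool) -> None:
    LOG.append({"command": command, "action": action, "accepted": accepted})

def training_examples():
    # Accepted interactions are the supervision signal for the next model.
    return [e for e in LOG if e["accepted"]]

record_interaction("refund my last order", "open_refund_form", True)
record_interaction("refund my last order", "open_faq", False)
print(json.dumps(training_examples()))
```

Even this trivial log captures the key asymmetry: rejected predictions are just as informative for improving the reasoning model as accepted ones, since both record what the user actually wanted.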

We recommend that our customers deploy a reasoning engine as part of their applications today, as it is highly reliable and enterprise-ready for basic UI interactions. This will allow them to collect user feedback and improve the reasoning for natural language interactions over time, which we refer to as evolvable reasoning.

Currently, every software application consists of three important components: front-end, back-end, and data layer. We believe that all future applications will have a reasoning engine as the fourth component.

Thought Graph – A Representation for Reasoning

Thought Graph is an intermediate representation of reasoning. We think Thought Graph will be the cornerstone of AI-native applications the same way HTML became the cornerstone of web applications. A Thought Graph server is a reasoning engine that enables applications to process natural language queries, even complex ones. It does this by breaking down the reasoning process into four steps:

  1. Generation of the reasoning space including retrieval of supplementary knowledge
  2. Search in the reasoning space to find the best reasoning alternative
  3. Execution of the chosen alternative(s) for reasoning
  4. Improvement of the reasoning based on reinforcement learning
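One toy way to picture such a reasoning engine, purely illustrative and not Lexie’s actual format, is a graph whose nodes are sub-tasks and whose edges mean “decomposes into”; executing a complex request is then a post-order walk over the chosen subgraph:

```python
# Toy reasoning-as-a-graph sketch: a complex user command is a node that
# decomposes into simpler sub-tasks, and execution visits children before
# their parent. All task names are made up for illustration.

from collections import defaultdict

edges = defaultdict(list)

def decompose(task: str, subtasks: list[str]) -> None:
    edges[task] = subtasks

def execute(task: str, done=None) -> list[str]:
    # Depth-first walk: leaves run directly; composite tasks run children first.
    done = [] if done is None else done
    for sub in edges[task]:
        execute(sub, done)
    done.append(task)
    return done

decompose("update shipping address", ["find order", "edit address", "confirm"])
print(execute("update shipping address"))
# ['find order', 'edit address', 'confirm', 'update shipping address']
```

Because the decomposition is an explicit data structure rather than hidden inside a model, programmers, auditors, and software modules can all inspect and adjust it, which is the analyzability argument made above.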


Thought Graph grounds its reasoning in business-specific knowledge and data. Customers can begin with level 1 reasoning by creating the Thought Graph for their application using our low-code tool. They can then evolve the reasoning to level 2 and level 3 reasoning (analogous to the level of automation in self-driving cars) based on detailed business requirements, as well as their users’ feedback.

Once the application logic is captured as a Thought Graph, the different modalities of the application, including a web application, a chatbot, a voice agent, or a mix of them, are generated automatically.

The standard architecture for deploying Thought Graph is demonstrated below. Our Thought Graph server sits between the front-end and back-end of the application and leverages Lexie’s proprietary AI model as well as the Thought Graph database to fulfill the application requests.


Lexie vs Other AI Middlewares

Lexie leverages open-source projects like LangChain and LlamaIndex. Developers can choose to use LangChain alone (without our Thought Graph server) to develop their applications. Other options include platforms like Fixie or Adept. Our Thought Graph differs from the competition in several ways.

  • First, Lexie offers a superior user experience for different modalities – all auto-generated from the same Thought Graph.
  • Second, Thought Graphs can leverage the generality of AI models for reasoning, making them more versatile than other platforms.
  • Third, our low-code technology enables rapid development of AI-native software.
  • Finally, the Thought Graph architecture is designed for enterprise applications, making it more reliable, analyzable, and scalable than other platforms.

The following figure depicts Lexie’s competitive advantage with respect to generality and reliability, as well as user experience and developer experience.


As you can see, Thought Graph architecture offers a significant advantage over the competition in all four areas. This makes Lexie the clear choice for businesses looking for reliable, scalable, and versatile AI-native applications.


© Lexie. All Rights Reserved.