Thought Graph

The AI transformation is a tangible reality now, and the pressing challenge is to democratize the power of AI for all, not exclusively for tech specialists. Over the last three years, our team at Lexie has been dedicated to developing ‘Thought Graph’—an AI-native programming framework. This framework is grounded in the innovative “Visitor Workflow Model,” which we will explore in depth in this and subsequent blog posts.

In our journey with Thought Graph, we challenge the conventional boundaries of software development. We believe and have steadily worked towards a future where the creation of robust, AI-driven applications isn’t confined to those with traditional coding expertise. Thought Graph is our answer to this challenge, a platform that empowers individuals, especially those with rich domain knowledge, to develop not just prototypes or proof of concepts, but fully-functional, complex applications.

Understanding the Thought Graph Model with Visitor Workflow

The Thought Graph with Visitor Workflow empowers citizen developers to assemble complex systems through an intuitive, no-code approach. This process is akin to a customer visiting an organization with a complex request. The customer is guided through the workflow, and the necessary service employees get involved to accomplish the task. Every aspect of the request is broken down into simpler tasks, each managed by a different employee. This method mirrors the capabilities of Thought Graph: if a developer possesses the critical thinking skills required to construct an organization, define workflows among employees, and specify each individual’s role, then that developer can adeptly use Thought Graph to become an efficient software developer.

In Thought Graph, the no-code user interface offers a visual tool where citizen developers can seamlessly create workflows using a drag-and-drop mechanism. This is similar to outlining an organization’s structure and the workflow interactions between various employees to complete the tasks. Additionally, developers can use Thought Graph’s AI assistant to develop and modify their workflows using natural language. The visitor model in Thought Graph serves as an algorithm, ensuring each component performs its designated role and efficiently passes tasks to other components, maintaining a cohesive and streamlined workflow.

DMV Analogy

Imagine a Department of Motor Vehicles (DMV) and an individual arriving to obtain their driving license. This person (the visitor) first encounters a receptionist who figures out what the request is and guides the visitor to the first step in the process. The visitor then visits multiple different agents to go through the process of getting the driving license: they provide the necessary documents and information to one agent, undergo an eye exam and take a photo with another agent, complete a written test with yet another agent, and finally take a driving test. Meanwhile, unseen staff at the DMV perform background checks and document audits, integral to the license issuance process.

Now, imagine we want to create software to automate the DMV experience. Without going into too much detail or showing the background workflows, our high-level Thought Graph would look like the following.

DMV analogy

The application breaks the visitor’s request into manageable steps, each executed by a component (akin to a DMV agent). Some components interact with the user (the visitor), for example by asking the user to fill out a form to collect the visitor’s information, while others may work independently or delegate tasks, all adhering to the application’s defined workflow in Thought Graph. Note that in the digital version of the DMV, some tasks still need to be done by human agents – at least for now – such as an eye exam, proctoring the written exam, or administering the driving test.

Understanding the parallels between orchestrating DMV workflows and constructing a software application is key:

1. Reception:
DMV – The journey begins with a receptionist who guides you and sets the stage for subsequent steps.
Thought Graph – The root component in Thought Graph figures out the user request and routes it to the right component.

2. Workflow-based Task Management:
DMV – You navigate through various agents for tasks like document verification and eye examinations. Each DMV agent helps the visitor find the next agent to visit.
Thought Graph – Distinct software components sequentially manage parts of the task, akin to the workflow facilitated by DMV agents.

3. Defining the roles and responsibilities:
DMV – Each DMV agent knows exactly what their roles and responsibilities are, including the information to gather from the visitor (perhaps using forms) and the documents they need from the visitor.
Thought Graph – We likewise define a description for each component and the type of interaction it needs to have with the user through the UI.

4. Specialized Agents for Targeted Tasks:
DMV – Certain agents specialize in specific tasks, such as conducting eye exams.
Thought Graph – The application includes specialized components for different functions, similar to DMV specialists.

5. Behind-the-Scenes Operations:
DMV – While you engage with front-office agents, others work quietly in the background, crucial to the DMV’s functionality.
Thought Graph – Certain components function behind the scenes, handling operations like background checks and reviewing the validity of the information.

6. Natural Language Interaction:
DMV – The process involves natural language communication with agents.
Thought Graph – When you build software with Thought Graph, the software is AI-native: you can talk to the agents in the app using natural language.
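The parallels above can be sketched in code. The following is a minimal, illustrative Python sketch under names we invented for this post (`Component`, `run_workflow`, and the stubbed tasks are not Thought Graph’s actual API); it only models a reception node routing a visitor through a chain of agents.

```python
class Component:
    """An 'agent' in the workflow: performs its task, then hands off to the next stop."""
    def __init__(self, name, task, next_component=None):
        self.name = name
        self.task = task                      # callable: visitor state -> visitor state
        self.next_component = next_component  # where to send the visitor next

    def handle(self, visitor):
        visitor = self.task(visitor)
        visitor["history"].append(self.name)  # keep an audit trail of the visit
        return visitor, self.next_component

def run_workflow(root, visitor):
    """Route the visitor through components until the workflow ends."""
    component = root
    while component is not None:
        visitor, component = component.handle(visitor)
    return visitor

# Wire a simplified license workflow: reception -> documents -> eye exam -> written test.
written   = Component("written_test", lambda v: {**v, "written_passed": True})
eye       = Component("eye_exam", lambda v: {**v, "vision_ok": True}, next_component=written)
docs      = Component("document_check", lambda v: {**v, "docs_verified": True}, next_component=eye)
reception = Component("reception", lambda v: {**v, "request": "driving_license"}, next_component=docs)

visitor = run_workflow(reception, {"name": "Alex", "history": []})
print(visitor["history"])  # ['reception', 'document_check', 'eye_exam', 'written_test']
```

In a real Thought Graph, the routing is driven by the graph and the AI assistant rather than hard-coded `next_component` pointers; the sketch only shows the visitor-passing discipline.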

Conclusion: A New Era for Citizen Developers

Thought Graph with Visitor Workflow Model ushers in a new era in software development. It breaks down the barriers of traditional programming, allowing citizen developers to create complex, AI-native applications with ease. This inclusive approach ensures that software solutions are not only technically sound but are also deeply rooted in domain-specific knowledge. As Thought Graph continues to evolve, it stands as a testament to the democratization of technology creation, heralding a future where everyone has the potential to be a developer.


Related posts: 
Revolutionizing User Experience With AI Chatbots Co-Pilots and AI Native Apps

Natural Language Code Revolution AI Leading the Way For Citizen Developers

Historically, attempts to replace traditional programming languages with no-code visual tools have fallen short. Real-world applications still predominantly rely on powerful programming languages like C++, Python, JavaScript, and React for two main reasons:

  • Beyond a certain complexity level, managing software with visual tools alone becomes impractical.
  • Customization, essential for serious applications, often demands more control than traditional no-code tools offer.

The following shows some of the existing no-code solutions – credit to Sacra research group.

The Amplified Flywheel Effect in No-code Platforms with AI Integration

The flywheel effect in no-code platforms, a self-reinforcing cycle that propels the platform’s growth and efficiency, is profoundly enhanced by AI. This can be visualized in a model with seven elements:

  • More Developers: The cycle starts with an increase in the developer base, leading to:
      • More Components: A diverse range of building blocks.
      • Better Composability: Enhanced ability to integrate these components.
      • Improved Customizability: Making components more adaptable to specific needs.
  • Generality and Ease of Use (boosted by the three items above): As components become more versatile and user-friendly, the platform’s general applicability and ease of use improve, fueling:
  • Higher ROI: Enhanced generality and usability lead to a higher return on investment, which in turn attracts more developers, thus completing the cycle.
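As a toy illustration of the cycle, here is a tiny Python simulation; every number and growth rate in it is invented purely to show the feedback loop, not to model real platform data.

```python
def flywheel(developers, rounds=3, ai_boost=1.5):
    """One pass per round: more developers -> more components ->
    better generality/ease of use -> higher ROI -> more developers."""
    for _ in range(rounds):
        components = developers * 4               # each developer contributes components
        generality = components * ai_boost        # AI amplifies composability/customizability
        roi = generality / 10000                  # higher generality -> higher ROI
        developers = int(developers * (1 + roi))  # higher ROI attracts more developers
    return developers

print(flywheel(100))                # with the AI boost: 119
print(flywheel(100, ai_boost=1.0))  # without it: 112
```

The point of the toy model is only that the AI multiplier compounds each time around the loop, so a modest per-round boost widens the gap over successive cycles.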

AI-driven tools bring unprecedented efficiency to component development, integration, and customization, making the no-code platform more powerful and appealing to a broader user base. This AI-enhanced flywheel effect promises a new era of growth and innovation in no-code development.

Flywheel effect

We are on the cusp of an AI-driven revolution in no-code development, a shift that promises to simultaneously tackle the complexities and enhance customization in unprecedented ways. This synergy between expansion and enhancement ignites a virtuous cycle, catapulting the return on investment (ROI) into a realm far beyond the reach of traditional no-code tools.

Related blog post: 
Redefining AI-Assisted Software Development – How Is Thought Graph Different From Other Solutions?

In the burgeoning field of AI tooling and AI-assisted code generation, Lexie stands out with its innovative approach. We reviewed more than 50 AI/agent tooling platforms. E2B presented the landscape of these tools in a well-structured figure below – although there are many more projects that are not covered in this diagram.

Our extensive interaction with customers has revealed that to truly democratize AI’s power, a solution must embody certain key characteristics. Here’s how Lexie meets these crucial demands:

1. Genuine No-Code Innovation

At Lexie, ‘no-code’ is more than a buzzword—it’s a practical reality. We have transcended the limitations of random code generation that often necessitates programmer intervention. Our platform is designed with the simplicity of team organization in mind. If you can define roles and responsibilities within a team, you can seamlessly build software with Lexie. It’s intuitive, user-friendly, and eliminates the need for any coding skills.

2. Scalability from Simple Workflows to Complex Full-Stack Applications

Lexie isn’t confined to simple task management. It shines in crafting sophisticated, full-stack applications. Whether it’s developing intricate e-commerce systems or custom CRMs, Lexie transforms daunting complexity into manageable simplicity. Our platform is versatile, catering to a broad spectrum of development needs from basic workflows to intricate, full-stack applications.

3. Robust and Analyzable AI Reasoning

A key differentiator for Lexie is its powerful AI reasoning capabilities. This feature ensures that application development is not only reliable but also straightforward, catering to users of varying expertise levels. Our robust AI framework underpins a solid and analytical foundation for development, allowing for deep insights and reliable outputs.

Despite the presence of several solutions in the market, none have fully addressed these multifaceted requirements. Understanding and tackling the complexities of this challenge was a journey for us. However, we are confident that the innovation of Thought Graph by Lexie is a game-changer. It has the potential to significantly alter the landscape of software development and the creation of intelligent systems on a broader scale.

Citizen Developers with AI vs. Traditional Developers

In 2017, Jensen Huang, CEO of NVIDIA, stirred up the tech world with his statement about “AI eating the software.” In recent years, AI code generation has gained significant traction, with OpenAI’s custom GPTs leading the way. This technological leap has sparked a debate between two extremes: some believe it’s “game over” for traditional software development, while others see AI as suitable only for simple tasks. At Lexie, we take a more nuanced view, recognizing the transformative potential of AI in programming while acknowledging its current limitations.

The Nuanced View

Rather than prematurely declaring the end of traditional software development or dismissing AI’s capabilities, we believe we are witnessing a paradigm shift in programming. This shift, which we consider more significant than previous technological revolutions like personal computers, the internet, mobile, or cloud computing, introduces a new era of “citizen developers.”

Citizen developers do not require expertise in standard programming languages. Instead, they excel in:

  1. Understanding Customer Pain Points: They have a knack for comprehending the pain points of customers and discerning their preferred mechanisms for resolving those issues.
  2. Specifying Software Requirements: Citizen developers are adept at specifying software requirements using natural language, wireframes, flowcharts, or a combination of these methods.
  3. Breaking Down Complex Problems: They excel at breaking down intricate problems into modular agents that collaborate seamlessly to accomplish complex tasks.
  4. Leveraging Existing Software Libraries: Citizen developers can harness the vast array of existing software components using natural language invocation and parameterization.
  5. Establishing Guardrails and Checks: They specialize in specifying guardrails, checks, and balances to ensure that these agents align precisely with the specified workflow.

Qualities of citizen developers

Challenges and Opportunities

This paradigm shift presents significant opportunities for individuals who deeply understand specific industries and workflows, regardless of their technical background. Identifying pain points and optimizing workflows remains a valuable skill. 

Despite the exciting prospects, several challenges must be addressed for this technology to scale effectively:

  1. Alignment and Analyzability: Ensuring that all AI-driven reasoning remains transparent and safe, addressing concerns about AI decision-making.
  2. Data Quality: Acknowledging that the quality of AI models heavily depends on the data used for training. Addressing inaccuracies and conflicts in data to avoid poor AI decisions is crucial.
  3. Compensation for Domain Experts: Implementing mechanisms to fairly compensate domain experts and data providers who play a pivotal role in fueling AI systems with intelligence.
  4. Meta Reasoning: Establishing a self-improvement cycle of meta-reasoning over time, reducing the need for extensive human intervention while continually enhancing the system’s intelligence.



The software industry is on the brink of a significant transformation, driven by AI and the rise of citizen developers. While challenges lie ahead, the potential for improving workflows and collaborations between AI and domain experts is immense. By addressing these challenges and embracing this paradigm shift, we can unlock new possibilities and usher in a more efficient and innovative era of software development. The future may not be the end for the software industry, but rather a promising new beginning.

Related blog post: 
Revolutionizing User Experience With AI Chatbots Co-Pilots and AI Native Apps


“Generative AI is just a phase. What’s next is interactive AI,” said Mustafa Suleyman, the co-founder of Google’s DeepMind.

Rich Miner, the founder of Android, described the future of UX at the AGI House event as follows:

  • “We will interact with computers in natural language, beyond just pushing buttons and writing code.
  • Everyone will be able to collaborate on building apps, and apps will be personalized for each user.
  • The user experience will be flattened, i.e. users’ natural language commands (even the complex ones) will be decomposed automatically across different UX components.”

At Lexie, we have developed a programming framework for building AI-native applications. Our customers include e-commerce, fintech, and prop-tech companies that are looking to improve their user experience and stay ahead of the competition using AI. Our customers typically go through a journey of improving their software at different stages of maturity, from introducing a standard chatbot – separate from their main application – to adding AI co-pilots to their software, and finally adopting an AI-native user experience. In this blog, we will elaborate on these alternatives and describe their pros and cons.

Chatbots: The Adaptive Conversationalists


Chatbots have become ubiquitous in the digital landscape. Powered by Large Language Models (LLMs), these AI-driven conversation agents aim to provide users with quick answers and assistance in a conversational manner. This evolution has led to a more personalized and enjoyable user experience, thanks to their ability to comprehend natural language and adapt to a broad range of queries.

Chatbots excel at handling routine tasks and relatively simple queries, making them valuable for customer support and information retrieval. Their adaptability makes them a versatile tool that can be tailored to specific domains and applications.

Co-Pilots: The Guiding Hands

Co-pilots take a hands-on approach to assist users in navigating more complex applications. They work in tandem with the main application UI, offering guidance and assistance within some of the functionalities in the app. Think of them as a digital assistant sitting shotgun, helping you navigate the road of specific functionalities within an application’s interface.


Co-pilots are a valuable addition to user experience. They can perform tasks like filling out forms, guiding users through complex workflows, and even helping with troubleshooting within selected functionalities of an application. However, their expertise is often confined to specific modules, limiting their versatility.

AI-Native Apps: The Future of Seamless Interaction

Now, let’s talk about AI-native apps, the game-changers in the world of user experience. What sets AI-native apps apart is that they are not merely a feature or an afterthought. When an AI-native app is built, or when an existing app is transformed into an AI-native one, it revolutionizes the user experience. Here’s how:

  1. Comprehensive AI-copilot for Core Functionalities: AI-native apps inherently provide co-pilot functionality for all features within the app. Unlike co-pilots, which are typically limited to assisting within specific functionalities, AI-native apps seamlessly integrate AI across the entire app ecosystem. This means you have an AI-powered co-pilot at your disposal for every aspect of the app.
  2. Novel Functionalities: AI-native apps take things a step further by automatically generating novel functionalities. These functionalities are based on the application’s specification, knowledge base, and a set of guardrails provided by the app’s author. The “guardrails” specify which parts of the knowledge base can be used to create novel functionalities. This ensures that AI-native apps enhance rather than disrupt the user experience while adhering to predefined guidelines set by the app’s author.
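The guardrail idea in item 2 can be sketched simply: entries in the knowledge base carry an author-set flag saying whether they may seed novel functionality. The field names below (`novel_ok`, `topic`) are invented for illustration, not Lexie’s actual schema.

```python
# Each knowledge-base entry carries an author-set guardrail flag saying
# whether it may be used to create novel functionality.
KNOWLEDGE_BASE = [
    {"topic": "returns_policy", "text": "Items can be returned within 30 days.", "novel_ok": True},
    {"topic": "internal_pricing", "text": "Wholesale margin is 40%.", "novel_ok": False},
]

def usable_for_novel_features(kb):
    """Only guardrail-approved entries may feed generated functionality."""
    return [entry for entry in kb if entry["novel_ok"]]

print([e["topic"] for e in usable_for_novel_features(KNOWLEDGE_BASE)])  # ['returns_policy']
```

Filtering the knowledge base up front, rather than trusting the model to self-censor, is what lets the generated functionality stay within the author’s predefined guidelines.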


In essence, AI-native apps are a holistic approach to transforming the way we interact with technology. They don’t just assist users; they enhance every facet of the app, providing guidance and automating tasks seamlessly across the entire application using a simple natural language interface. Plus, their ability to generate novel functionality based on the app’s knowledge base, within the specified guardrails, adds a layer of innovation and efficiency that chatbots and co-pilots simply cannot match.

The Superiority of AI-Native Apps

So, why are AI-native apps superior to chatbots and co-pilots? Here are a few key reasons:

  1. Comprehensive Assistance: AI-native apps provide co-pilot functionality for all features within the app, offering users assistance across the board using a simple natural language interface.
  2. Automated Innovation: AI-native apps generate novel functionalities based on app specifications, enhancing user productivity and problem-solving capabilities while adhering to the app author’s guidelines.


In conclusion, while chatbots, including advanced LLM-based chatbots, and co-pilots have their merits in assisting users within specific functionalities of an application, AI-native apps represent the next frontier in user interaction. They are more versatile, seamless, and efficient, offering a superior user experience that adapts to your needs across the entire app ecosystem. We think with AI-native apps, we are going to witness a revolution in the way we interact with technology, making our digital lives simpler, smarter, and more enjoyable.

Related Blogs:

AI Reasoning: Balancing Generality and Reliability – Friends of Lexie Blog Series, July Edition

In the ever-evolving landscape of artificial intelligence, the potential for transformative impact is monumental. According to Goldman Sachs, AI has the potential to exert a staggering $7 trillion influence on the global GDP. However, as AI rapidly advances, a new challenge emerges: the delicate balance between generality and reliability in AI reasoning.

The generality in LLM systems is quite impressive. They are pretty great at learning from data and applying what they have learned to new data. For example, you can use the same AI algorithm to analyze healthcare data, e-commerce data, or insurance data. But current LLM systems still have a lot of limitations, especially when it comes to complex tasks. The main reason for these limitations is that they cannot reason – a capability also known as System 2 intelligence.

An ideal reasoning module can break down complex tasks into simpler ones, retrieve relevant information when it lacks it, or use other non-AI tools to fulfill user queries if needed. To address this problem, we often put LLM systems inside traditional software structures. One example of this is the LangChain tool, along with a few other projects. This is a bit like putting a coarse-grained reasoning system into standard software. But this approach limits the generality of AI systems.

On the other hand, there are agentic solutions. These approaches work probabilistically, which means they do not follow strict rules and they are quite generic since they use LLM models themselves for reasoning.  Projects like WebGPT, AutoGPT, and ReAct fall into this category. However, they are not reliable enough for production applications, especially enterprise-grade ones. 

Sure, these two paradigms are influential, but they are not the only ways to explore AI reasoning. The truth is, the lack of a versatile and dependable reasoning engine is a major bottleneck that is holding back countless AI initiatives. It is keeping them stuck in the demo phase, and it is preventing them from delivering a seamless user experience in production systems. 

Introducing Lexie: Enhancing AI Reasoning with Thought Graphs

Meet Lexie, a new AI that uses a reasoning representation called Thought Graph (“TG”). The TG and its framework neatly separate the complexities of AI reasoning into four distinct parts:

  1. Generating a reasoning space
  2. Searching the reasoning space for the best and most reliable option
  3. Executing the reasoning
  4. Refining the reasoning with human feedback

This separation gives us the flexibility to build a range of reasoning abilities (not just two extremes), striking a careful balance between reliability and generality. In our opinion, a reasoning representation like TG is as essential to AI applications as HTML is to web development.
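To make the four-part separation concrete, here is a minimal Python sketch; all names and the scoring scheme are invented for illustration, since Thought Graph’s actual interfaces are not shown in this post.

```python
from dataclasses import dataclass

@dataclass
class ReasoningPath:
    steps: list
    score: float = 0.0  # estimated reliability of this path

def generate_space(query):
    """Step 1: enumerate candidate reasoning paths for the query."""
    return [
        ReasoningPath(steps=[f"decompose:{query}", "retrieve", "answer"], score=0.9),
        ReasoningPath(steps=[f"answer_directly:{query}"], score=0.4),
    ]

def search_space(paths):
    """Step 2: pick the most reliable candidate."""
    return max(paths, key=lambda p: p.score)

def execute(path):
    """Step 3: run the chosen path (stubbed here: just echo the steps)."""
    return {"result": " -> ".join(path.steps)}

def refine(path, feedback):
    """Step 4: nudge the path's reliability score from human feedback."""
    path.score += 0.1 if feedback == "good" else -0.1
    return path

best = search_space(generate_space("renew my license"))
print(execute(best)["result"])  # decompose:renew my license -> retrieve -> answer
```

Because each stage is a separate function, any one of them can be swapped – for example, a stricter `search_space` for an enterprise deployment – which is the flexibility the separation is meant to buy.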

An Open Representation for Reasoning

TG has been a huge win for us, and it’s now an essential part of our system. It gives us the flexibility to use a variety of reasoning methods, and we can implement essential safety mechanisms no matter which method we choose.

We have been in conversation with a number of industry experts who are working on some very innovative AI projects. Their collective insights have led us to a clear conclusion: open representation of reasoning is the way to go. That is why we are building a community of thought leaders to enrich our Thought Graph and make it open source in the future.

Upcoming Blog Series and Call to Action

We are excited to share a series of blog posts in the coming months that will dive into the insights we have gathered from over two and a half years of building and refining the Thought Graph and deploying it across diverse applications. Whether you are just starting out on your AI journey or you are a seasoned pro, we think you will find this series informative and thought-provoking. Stay tuned!

If you believe in our vision of creating an open representation for reasoning, we would love to connect with you. Please subscribe to our blog series and become a part of the vibrant discourse that is shaping the future of AI reasoning. Together, we can create a future where generality and reliability seamlessly coexist in AI reasoning.

Thanks for your interest and support! We’re dedicated to making every app AI-native in a reliable way. Current AI models and LLMs are not reliable enough for enterprise applications, and their behavior is hard to analyze or explain. We’re working hard to change that.

We’re taking a graph-based approach to grounding our AI model. This representation lets different parties (programmers, product designers, human auditors, software modules, and AI models) analyze, understand, and improve the reasoning. We believe this representation, which we call Thought Graph, will be the foundation of AI-native applications, just like HTML is the foundation of web applications.

As we expand our “Friends of Lexie” list, we would like to welcome you to receive updates about our exciting venture. Please feel free to review our introduction to Lexie in this blog. If you have any questions or feedback, please contact us. We hope you enjoy this article.

Market Demand for AI-Native Apps

Replit’s data indicates that the number of AI projects on their platform has increased 34-fold since last year. A Databricks survey of 9,000 organizations also shows that almost every CEO is asking their business units to develop an AI strategy. Our observations from the market are similar. Our customers want to use AI in their customer support and content creation. They want to deploy AI-native solutions quickly and without allocating too many of their resources.

Lexie Updates

Product Highlights

We learned that our customers are interested in using our internal development tool – Thought Graph Low-code to customize their applications. We have decided to add this tool as part of our offering as well. Please feel free to watch the demo below.

Introduction to Lexie

Lexie is a groundbreaking startup poised to transform the way we interact with applications. With our cutting-edge low-code technology, we are making every app an AI-native app. It can increase the productivity and efficiency of building apps by 20x.

Imagine a user experience where you can effortlessly communicate with any application using natural language, in addition to and in coordination with other input actions (such as tapping, typing, and clicking). Lexie drives the application’s UI on behalf of the user, enabling seamless interaction and automating complex tasks that require touch points across different parts of the application. It eliminates unnecessary interactions and also automatically creates novel interactions if needed. Our customers include large e-commerce providers as well as startups from InsureTech, PropTech, and FinTech sectors.

Elevating User Experience with a Multi-Modal Reasoning Engine

Lexie was founded in 2021 with the vision of building a platform that would allow developers to transform every application into an AI-native application, where users can interact with the application using natural language. The AI agent (initially referred to as an “overlay bot”) performs the necessary reasoning and drives the application automatically, skipping unnecessary user interactions and creating novel UIs as needed. The company was an early entrant into the market. Therefore, we decided to build applications using our development platform rather than presenting the platform itself as the product. Lexie built multiple co-pilots/chatbot agents for its e-commerce customers.

The launch of ChatGPT resulted in a substantial increase in interest in our technology. Customers were impressed by the speed at which Lexie was able to develop software, even though building AI-native software is more challenging than developing traditional software.


The truth is that we kept building our development platform – Thought Graph – and used that to build our customer use cases. Over the last 5 months, Lexie decided to change its strategy and expose Thought Graph to the customers.

Why is Reasoning Important for AI-native Applications?

Large language models (LLMs) are not capable of handling complex scenarios on their own. While their performance is quite impressive when used as System 1 intelligence, they are far from ready for enterprise applications when used as a reasoning engine for System 2 intelligence.


If you follow the advancement of AI research, you probably know that reasoning in a generic sense is one of the most important areas that top minds are working hard to solve. Examples include Tree of Thoughts and Thought Cloning, which use existing AI models to improve reasoning, and Self-Supervised Learning and Generative Flow Networks, which use completely new models to improve reasoning.

When it comes to generating and driving the user experience, LLMs have similar shortcomings and therefore we need a robust reasoning engine. However, there are three main considerations:

  1. Since we are solving the problem for a particular domain, we can leverage a fine-tuned version of existing AI models grounded using domain knowledge.
  2. Our main goal is to build a model for a software application to predict the right action(s) given a user command. Software apps have a good amount of documentation and source code that can help to build such a model.
  3. Collecting interaction data and user feedback is as important as, if not more important than, the AI model. Such data will be key to improving and evolving the reasoning model.

We recommend that our customers deploy a reasoning engine as part of their applications today, as it is highly reliable and enterprise-ready for basic UI interactions. This will allow them to collect user feedback and improve the reasoning for natural language interactions over time, which we refer to as evolvable reasoning.
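The idea of evolvable reasoning – predict an action, collect feedback, improve – can be sketched as follows; the command-to-action table and the feedback store are hypothetical stand-ins, not our production model.

```python
from collections import defaultdict

# Hypothetical command->action table for a storefront app.
ACTIONS = {
    "track order": "open_order_status",
    "change address": "open_profile_form",
}
feedback_scores = defaultdict(int)  # (command, action) -> net user approval

def predict_action(command):
    """Pick the known action whose trigger phrase appears in the command."""
    for phrase, action in ACTIONS.items():
        if phrase in command.lower():
            return action
    return "ask_clarification"  # fall back instead of guessing

def record_feedback(command, action, accepted):
    """Interaction data matters as much as the model: store it for retraining."""
    feedback_scores[(command, action)] += 1 if accepted else -1

action = predict_action("Where can I track order #123?")
record_feedback("Where can I track order #123?", action, accepted=True)
print(action)  # open_order_status
```

The phrase-matching predictor stands in for basic UI interactions that are reliable today; the accumulated feedback is what would let a learned model take over for richer natural language commands later.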

Currently, every software application consists of three important components: front-end, back-end, and data layer. We believe that all future applications will have a reasoning engine as the fourth component.

Thought Graph – A Representation for Reasoning

Thought Graph is an intermediate representation of reasoning. We think Thought Graph will be the cornerstone of AI-native applications the same way HTML became the cornerstone of web applications. A Thought Graph server is a reasoning engine that enables applications to process natural language queries, even complex ones. It does this by breaking down the reasoning process into four steps:

  1. Generation of the reasoning space including retrieval of supplementary knowledge
  2. Search in the reasoning space to find the best reasoning alternative
  3. Execution of the chosen alternative(s) for reasoning
  4. Improvement of the reasoning based on reinforcement learning


Thought Graph grounds its reasoning in business-specific knowledge and data. Customers can begin with level 1 reasoning by creating the Thought Graph for their application using our low-code tool. They can then evolve the reasoning to level 2 and level 3 reasoning (analogous to the level of automation in self-driving cars) based on detailed business requirements, as well as their users’ feedback.

When you have the application logic in the form of a Thought Graph, the different modalities of the application – including a web application, a chatbot, a voice agent, or a mix of them – are automatically generated.
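As a rough illustration of one graph driving many modalities, consider the Python sketch below; the graph schema and renderer functions are invented for this example and are not the real generation pipeline.

```python
# One logical definition of two form steps; the schema is invented.
APP_GRAPH = [
    {"id": "ask_name", "prompt": "What is your name?"},
    {"id": "ask_email", "prompt": "What is your email?"},
]

def render_web(graph):
    """Web modality: each node becomes a form input."""
    return "".join(f'<input name="{n["id"]}" placeholder="{n["prompt"]}">' for n in graph)

def render_chat(graph):
    """Chatbot modality: the bot asks each prompt in turn."""
    return [n["prompt"] for n in graph]

def render_voice(graph):
    """Voice modality: text-to-speech cues for a voice agent."""
    return [("say", n["prompt"]) for n in graph]

print(render_chat(APP_GRAPH))  # ['What is your name?', 'What is your email?']
```

The application author maintains only the graph; each renderer is generated machinery, which is why adding a new modality does not require touching the application logic.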

The standard architecture for deploying Thought Graph is demonstrated below. Our Thought Graph server sits between the front-end and back-end of the application and leverages Lexie’s proprietary AI model as well as the Thought Graph database to fulfill the application requests.


Lexie vs Other AI Middlewares

Lexie is leveraging open-source projects like LangChain and LlamaIndex. Developers can choose to use LangChain alone (without using our Thought Graph server) to develop their applications. The other options are using platforms like Fixie or Adept. Our Thought Graph is different from the competition in several ways.

  • First, Lexie offers a superior user experience for different modalities – all auto-generated from the same Thought Graph.
  • Second, Thought Graphs can leverage the generality of AI models for reasoning, making them more versatile than other platforms.
  • Third, our low-code technology enables rapid development of AI-native software.
  • Finally, the Thought Graph architecture is designed for enterprise applications, making it more reliable, analyzable, and scalable than other platforms.

The following figure depicts Lexie’s competitive advantage with respect to generality and reliability, as well as user experience and developer experience.


As you can see, Thought Graph architecture offers a significant advantage over the competition in all four areas. This makes Lexie the clear choice for businesses looking for reliable, scalable, and versatile AI-native applications.

Upcoming Blog Series and Call to Action

We’re excited to share a series of blog posts in the coming months that will dive into the insights we’ve gathered from over two and a half years of refining the Thought Graph and deploying it across diverse applications. Whether you’re just starting out on your AI journey or you’re a seasoned pro, we think you’ll find this series informative and thought-provoking. Stay tuned!

If you’re down with our vision of creating an open representation for reasoning, we’d love to connect with you. You can subscribe to our blog series through this link and become a part of the vibrant discourse that’s shaping the future of AI reasoning. Together, we can create a future where generality and reliability seamlessly coexist in AI reasoning.

© Lexie. All Rights Reserved.