
How next-gen AI technology is transforming daily work for employees

venturebeat.com 3 days ago

Presented by MongoDB

The outcome of many AI investments remains uncertain, but one area where AI is proving itself is workforce productivity. A recent Harvard Business School study found that workers are significantly more productive with generative AI. In fact, workers using gen AI can complete tasks 25% more quickly — and produce higher-quality results. Generative AI is also transforming the “soft skills” that 92% of executives say are increasingly crucial, making employees both better at their jobs and happier at work.

The productivity advantages that AI brings to organizations, from more effective and creative collaboration to reskilling and upskilling to filling emerging roles, are proving to be a significant differentiator for companies. They also represent a huge opportunity for employees.

But to tap into the full potential of gen AI, organizations should feed their own data into large language models (LLMs). This is where the right database solution coupled with retrieval-augmented generation (RAG) can address the limitations of LLMs, such as out-of-date outputs.
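The RAG pattern described above can be sketched in a few lines: documents are stored alongside vector embeddings, the user's query is embedded the same way, the nearest documents are retrieved, and those documents are prepended to the LLM prompt as context. The toy three-dimensional vectors and in-memory scoring below are illustrative stand-ins for a real embedding model and a vector index such as Atlas Vector Search.

```python
# Minimal RAG retrieval sketch. The documents, embeddings, and field names
# here are hypothetical examples, not any vendor's actual schema.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# A tiny "collection": each record stores source text alongside its embedding.
docs = [
    {"text": "Q3 revenue grew 12% year over year.", "embedding": [0.9, 0.1, 0.0]},
    {"text": "The office closes early on Fridays.", "embedding": [0.1, 0.9, 0.1]},
    {"text": "Revenue guidance was raised for Q4.", "embedding": [0.8, 0.2, 0.1]},
]

def retrieve(query_embedding, k=2):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query_embedding, d["embedding"]),
                    reverse=True)
    return [d["text"] for d in ranked[:k]]

def build_prompt(question, query_embedding):
    """Ground the LLM prompt in retrieved, up-to-date context."""
    context = "\n".join(retrieve(query_embedding))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How did revenue perform?", [0.85, 0.15, 0.05])
```

In production, the linear scan would be replaced by an approximate-nearest-neighbor query against the database's vector index, and the assembled prompt would be sent to the LLM; the freshness of the retrieved context is what counters the out-of-date-output problem.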

“Employee productivity has emerged as a key area where gen AI can make an immediate impact on organizations of all shapes and sizes,” says Peder Ulander, Chief Marketing and Strategy Officer at MongoDB. “Being able to leverage proprietary operational data is the key to unlocking gen AI’s full potential, and MongoDB is proud to enable forward-thinking startups like 4149, Arc53 and Lavender to build AI-powered applications that automate and personalize common tasks.”

4149: An AI teammate who has your back

“Work is about how people come together and get things done, and generative AI offers an incredible opportunity to help people work better together,” says Adrian Vatchinsky, co-founder and CEO of 4149. “Our vision is that every team exceeds their goals by working alongside an AI teammate that elevates their work — from putting together research reports on people and companies the team is scheduled to meet with, to flagging tasks the team might be procrastinating on, to even shouting out someone for their awesome contributions that week.”

To achieve that vision, the 4149 team built a proactive AI agent: one that does not wait for a person to assign it a task, but instead assigns itself tasks by understanding what the team needs at that moment.

At the core of this AI teammate is the reflection system. This programming technique gives 4149 the ability to not just summarize items like emails and documents, but also to create insightful takeaways, in real time, from any project it's given access to. It uses a custom-built AI-agent framework that integrates LLMs from OpenAI and Anthropic, with support for self-hosted models coming soon. 4149 chose MongoDB and its Atlas Vector Search capability as the underlying database technology for the platform because it fit the team's needs well.

The platform’s models process project documentation and team interactions, collecting reflections, work insights and associated vector embeddings into Atlas Vector Search along the way. Storing these takeaways alongside vector embeddings in the same database eliminates unnecessary data redundancies. This expedites data access and simplifies the technology stack. Higher-tier insights are stored in Atlas, which makes up the momentum pipeline that 4149 uses to determine its next move. Plus, the question-answer pairs pull directly from the Atlas store, so it serves as each 4149 teammate’s brain or memory.

“Given the reflections stored in Atlas are not just summaries of chats and meetings but rather structured data, being able to query them using a mixture of vector embeddings and MongoDB’s rich query framework has been a boost to our productivity and helped us reduce data redundancy,” Vatchinsky says. “Now our focus is on making sure that the tasks and interactions the human teammates have with their 4149 teammates are as impactful and meaningful as possible.”
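The query pattern described above, combining structured filters with semantic ranking, can be sketched as follows. The documents, field names, and in-memory scoring are hypothetical stand-ins; in a real deployment the same pattern runs inside the database via a vector index such as Atlas Vector Search alongside ordinary query predicates.

```python
# Sketch: filter reflections on regular structured fields first, then rank
# the survivors by embedding similarity. All data here is illustrative.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Reflections are structured documents, not plain summaries: each carries
# queryable fields plus an embedding of its content.
reflections = [
    {"project": "launch", "kind": "risk",  "text": "Vendor contract may slip.",
     "embedding": [0.9, 0.1]},
    {"project": "launch", "kind": "kudos", "text": "Great demo from the team.",
     "embedding": [0.2, 0.8]},
    {"project": "hiring", "kind": "risk",  "text": "Pipeline is thin this month.",
     "embedding": [0.7, 0.3]},
]

def query_reflections(project, query_vec, k=1):
    # 1) Structured filter, like a normal database query on metadata fields.
    candidates = [r for r in reflections if r["project"] == project]
    # 2) Semantic ranking among only the filtered candidates.
    candidates.sort(key=lambda r: cosine(query_vec, r["embedding"]), reverse=True)
    return [r["text"] for r in candidates[:k]]
```

Keeping both kinds of fields on the same record is what lets a single query mix exact-match filtering with similarity ranking, rather than joining results from two separate stores.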

DocsGPT: Developer documentation made easy

DocsGPT, developed by AI champion Arc53, is an open-source documentation assistant that takes the form of a helpful chatbot sitting at a developer’s elbow. Its aim is to help developers build end-user conversational experiences like customer service chatbots or a natural-language interface on top of a company’s knowledge base. As a platform-agnostic, end-to-end open-source tool built for enterprise organizations, DocsGPT lets developers use local LLMs for enhanced security and privacy, as well as the data sources of their choice. Plus, it’s customizable and features a simple API for building extensions and integrating into existing systems.

Arc53 turned to MongoDB to solve one of DocsGPT’s main challenges: the need to quickly iterate on vector indexes. This requires a tool that supports storing multiple vectors on the same text data, so the team can evaluate retrieval quality across different embedding models, a capability MongoDB provides. Building gen AI apps rapidly, and at low cost and complexity, is critical; having source data, metadata and vector search synchronized and accessible via a single API on a unified platform makes that possible.
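The iteration pattern described here can be sketched briefly: store several candidate embeddings on the same text record, then measure which embedding ranks the known-relevant document highest. The records, vectors, and the "model_a"/"model_b" field names below are hypothetical illustrations of the idea, not DocsGPT's actual schema.

```python
# Sketch: multiple embedding fields on one record, compared by recall@1.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Each chunk carries two vectors, one per candidate embedding model.
chunks = [
    {"text": "How to install the CLI", "model_a": [0.9, 0.1], "model_b": [0.5, 0.5]},
    {"text": "How to file a bug",      "model_a": [0.1, 0.9], "model_b": [0.9, 0.1]},
]

def top1(field, query_vec):
    """Retrieve the single best chunk under the chosen embedding field."""
    return max(chunks, key=lambda c: cosine(query_vec, c[field]))["text"]

def recall_at_1(field, eval_set):
    """Fraction of labeled queries whose relevant chunk is ranked first."""
    hits = sum(1 for q, relevant in eval_set if top1(field, q) == relevant)
    return hits / len(eval_set)

# A labeled query: this query vector should retrieve the install doc.
eval_set = [([1.0, 0.0], "How to install the CLI")]
```

Because both vectors live on the same record, swapping `field` is the only change needed to compare models; no second collection or re-ingestion is required.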

They also use MongoDB Atlas for storage at the application layer because it is simple and flexible. This enables rapid adaptation as the tool evolves. It can store data no matter the structure, scale to huge data sets and make it easier for developers and data scientists to deliver better AI-driven solutions faster.

“Conservatively, users have reported a 20% increase in productivity when utilizing AI-assisted documentation chatbots,” says Alex Tushynski, co-founder of Arc53. “If you aim to achieve the best results in information retrieval, iterating on your vectors and embeddings is critical. MongoDB’s vector search provides the necessary resources for this.”

Lavender: Sales emails that actually get results

“Lavender’s goal is to help anyone write personalized, better-targeted and high-quality emails faster, to increase reply rates from customers and make email a more powerful cold outreach tool,” says Jared Smith, CISO of Lavender.

A well-crafted personal email can take fifteen to twenty minutes to write; with the help of Lavender, that time can be reduced to three to five minutes. However, Lavender does more than automate the writing process. Using OpenAI’s GPT LLMs and ChatGPT, the tool acts as a writing coach, collaborating with the user to generate personalized email copy, write headlines, remove jargon, simplify sentences, optimize formatting and more. Additionally, to help improve quality, copy is analyzed and scored as it is written, pulling from an array of successful email examples.

“Getting that signal to noise ratio out of very unstructured emails is what we’re good at,” Smith says. “We want to be more surgical with it. We want to use your historical data. What’s actually made people respond? And move that forward with the future.”

“Engagement is the most important metric, and we often see reply rates increase anywhere from 200 to 300 percent,” Smith noted.

The platform is built on MongoDB Atlas running on Google Cloud. Lavender chose MongoDB because of the flexibility of its document data model. This lets Lavender add fields on-demand without lengthy schema migrations, and store data of any structure. This makes it the perfect engine for extracting data taken from emails, which is messy and unstructured.
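The schema flexibility described above can be illustrated with plain records: documents of different shapes coexist in one collection, and a new field can appear on new records without migrating old ones. The field names below are hypothetical, and the Python list stands in for a database collection.

```python
# Sketch of a flexible document model for messy, email-derived data.
emails = []

# Early records: only basic fields were extracted.
emails.append({"sender": "a@example.com", "subject": "Intro"})

# Later records gain a new field (say, a sentiment score) with no migration
# step and no change to the records already stored.
emails.append({"sender": "b@example.com", "subject": "Follow-up", "sentiment": 0.8})

# Queries tolerate the missing field instead of requiring a uniform schema.
scored = [e for e in emails if e.get("sentiment") is not None]
```

In a relational store, the same change would typically require an `ALTER TABLE` migration and a nullable column; here, old and new record shapes simply coexist.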

“MongoDB helped us provide structure to that unstructured data set in a way that relational databases just don’t allow for, or don’t make it easy to do. It’s done a tremendous job for us in scaling up to billions of records in our database,” he says. “And Atlas Vector Search has allowed us to search across the saved metadata for all of these emails and pull out much more deep, impactful insights, using much more natural language processing to improve results.”

Taking LLMs to the next level

“MongoDB is built on the flexible document model with native vector search capabilities. This makes it incredibly easy for our customers to build RAG-powered, next-gen applications. Our goal is to empower every organization to innovate with data, and it’s exciting to see how 4149, Lavender and Arc53 are able to use gen AI to help teams work more efficiently,” MongoDB’s Peder Ulander noted.

Indeed, 4149, Lavender and Arc53 demonstrate what is possible when efficient use of proprietary data is combined with the latest powerful LLMs. While virtually every organization can access the growing number of LLMs out there, each business alone has access to its own data.

With capabilities like MongoDB Atlas Vector Search, businesses can leverage RAG architectures to ensure that their modern AI applications provide contextual, up-to-date data for more accurate responses. Furthermore, choosing the correct database — with robust vector capabilities — can help businesses and employees alike make the most out of their AI investments.

The takeaway: innovative AI features are transforming how developers work — and developers are creating the innovative AI applications that are transforming how the world works. However, it all depends on making the most of your data.
