

BUILDING AI WITH MONGODB: CONVERSATION INTELLIGENCE WITH OBSERVE.AI

Mat Keep
April 29, 2024 | Updated: May 9, 2024
#genAI

What's really happening in your business? The answer to that question lies in
the millions of interactions between your customers and your brand. If you could
listen in on every one of them, you'd know exactly what was going well, and what
wasn't. You'd also be able to continuously improve customer service by coaching
agents when needed. The reality, however, is that most companies have visibility
into only 2% of their customer interactions. Observe.AI is here to change that.
The company is focused on being the fastest way to boost contact center
performance with live conversation intelligence.

Check out our AI resource page to learn more about building AI-powered apps with
MongoDB.

Founded in 2017 and headquartered in California, Observe.AI has raised over
$200M in funding. Its team of 250+ members serves more than 300 organizations
across various industries. Leading companies like Accolade, Pearson, Public
Storage, and 2U partner with Observe.AI to accelerate outcomes from the
frontline to the rest of the business.

The company has pioneered a 40 billion-parameter contact center large language
model (LLM) and one of the industry’s most accurate Generative AI engines.
Through these innovations, Observe.AI provides analysis and coaching to maximize
the performance of its customers’ front-line support and sales teams.



We sat down with Jithendra Vepa, Ph.D., Chief Scientist & India General Manager
at Observe.AI, to learn more about the AI stack powering the industry-first
contact center LLM.


CAN YOU START BY DESCRIBING THE AI/ML TECHNIQUES, ALGORITHMS, OR MODELS YOU ARE
USING?

“Our products employ a versatile range of AI and ML techniques, covering various
domains. Within natural language processing (NLP), we rely on advanced
algorithms and models such as transformers, including the likes of
transformer-based in-house LLMs, for text classification, intent and entity
recognition tasks, summarization, question-answering, and more. We embrace
supervised, semi-supervised, and self-supervised learning approaches to enhance
our models' accuracy and adaptability."

"Additionally, our application extends its reach into speech processing, where
we leverage state-of-the-art methods for tasks like automatic speech recognition
and sentiment analysis. To ensure our language capabilities remain at the
forefront, we integrate the latest Large Language Models (LLMs), ensuring that
our application benefits from cutting-edge natural language understanding and
generation capabilities. Our models are trained using contact center data to
make them domain-specific and more accurate than generic models out there.”
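Observe.AI's production classifiers are transformer-based and proprietary; as a self-contained illustration of the intent-recognition task Jithendra describes, here is a minimal naive Bayes text classifier. It is a deliberately simple stand-in for a transformer, and the training utterances are hypothetical:

```python
from collections import Counter, defaultdict
import math

# Toy training data: contact-center utterances labeled with intents.
# (Hypothetical examples; the real models are transformer-based LLMs
# trained on large volumes of contact-center data.)
TRAINING_DATA = [
    ("i want to cancel my subscription", "cancellation"),
    ("please cancel my account today", "cancellation"),
    ("i was charged twice on my bill", "billing"),
    ("why is my invoice higher this month", "billing"),
    ("the app crashes when i log in", "technical_support"),
    ("i cannot reset my password", "technical_support"),
]

def train(examples):
    """Train a multinomial naive Bayes intent classifier."""
    word_counts = defaultdict(Counter)   # intent -> word frequencies
    intent_counts = Counter()            # intent -> number of examples
    vocab = set()
    for text, intent in examples:
        words = text.split()
        word_counts[intent].update(words)
        intent_counts[intent] += 1
        vocab.update(words)
    return word_counts, intent_counts, vocab

def classify(text, model):
    """Return the most likely intent under the trained model."""
    word_counts, intent_counts, vocab = model
    total = sum(intent_counts.values())
    best_intent, best_score = None, float("-inf")
    for intent in intent_counts:
        # log prior + log likelihood with add-one smoothing
        score = math.log(intent_counts[intent] / total)
        denom = sum(word_counts[intent].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[intent][word] + 1) / denom)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

model = train(TRAINING_DATA)
print(classify("can you cancel my subscription", model))   # → cancellation
print(classify("my bill looks wrong this month", model))   # → billing
```

The same train/classify shape carries over to the transformer setting; only the model class and the feature representation change.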


CAN YOU SHARE MORE ON HOW YOU TRAIN AND TUNE YOUR MODELS?

“In the realm of model development and training, we leverage prominent
frameworks like TensorFlow and PyTorch. These frameworks empower us to craft,
fine-tune, and train intricate models, enabling us to continually improve their
accuracy and efficiency."

"In our natural language processing (NLP) tasks, prompt engineering and
meticulous fine-tuning hold pivotal roles. We utilize advanced techniques like
transfer learning and gradient-based optimization to craft specialized NLP
models tailored to the nuances of our tasks."
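As a framework-free sketch of the transfer-learning idea described above (freeze a pretrained encoder, train only a small task head with gradient-based optimization), the following uses a toy hand-built "encoder" and plain Python gradient descent. In practice the encoder would be a transformer fine-tuned in TensorFlow or PyTorch:

```python
import math

def frozen_encoder(text):
    """Stand-in for a frozen pretrained encoder: text -> fixed 2-d features.
    (Feature 1: fraction of 'negative' words; feature 2: utterance length.)"""
    words = text.split()
    negative = {"angry", "cancel", "refund", "terrible", "broken"}
    neg_frac = sum(w in negative for w in words) / len(words)
    return [neg_frac, len(words) / 10.0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_head(examples, lr=1.0, epochs=200):
    """Gradient descent on a logistic head; encoder weights stay frozen."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for text, label in examples:
            x = frozen_encoder(text)
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = p - label                      # dLoss/dz for log loss
            w = [w[i] - lr * err * x[i] for i in range(2)]
            b -= lr * err
    return w, b

# Hypothetical escalation-detection data: 1 = needs escalation.
examples = [
    ("i am angry and want a refund", 1),
    ("the product is terrible and broken", 1),
    ("thanks everything works great", 0),
    ("happy with the quick support", 0),
]
w, b = train_head(examples)
x = frozen_encoder("this is broken i want a refund")
print(sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5)  # → True (escalate)
```

Swapping the toy head for a PyTorch module and the toy encoder for a pretrained transformer gives the real fine-tuning loop, but the gradient-update structure is the same.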


HOW DO YOU OPERATIONALIZE AND MONITOR THESE MODELS?

"To streamline our machine learning operations (MLOps) and ensure seamless
scalability, we have incorporated essential tools such as Docker and Kubernetes.
These facilitate efficient containerization and orchestration, enabling us to
deploy, manage, and scale our models with ease, regardless of the complexity of
our workloads."
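As one illustration of the Kubernetes-based deployment pattern described here, a model-serving Deployment might look like the following config fragment. The image name, replica count, and probe path are all hypothetical, not Observe.AI's actual configuration:

```yaml
# Hypothetical Deployment for a model-serving container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: intent-model
spec:
  replicas: 3                 # scale horizontally with workload
  selector:
    matchLabels:
      app: intent-model
  template:
    metadata:
      labels:
        app: intent-model
    spec:
      containers:
        - name: server
          image: registry.example.com/intent-model:1.4.2
          ports:
            - containerPort: 8080
          resources:
            limits:
              memory: "2Gi"
              cpu: "1"
          readinessProbe:       # route traffic only to healthy replicas
            httpGet:
              path: /healthz
              port: 8080
```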

"To maintain a vigilant eye on the performance of our models in real-time, we
have implemented robust monitoring and logging to continuously collect and
analyze data on model performance, enabling us to detect anomalies, address
issues promptly, and make data-driven decisions to enhance our application's
overall efficiency and reliability.”
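The monitoring described above can be sketched in miniature: a rolling window of latency and prediction-confidence metrics with simple anomaly checks. This is an illustrative toy; production systems typically export such metrics to dedicated observability tooling rather than checking them in-process:

```python
import statistics
from collections import deque

class ModelMonitor:
    """Minimal rolling monitor for a deployed model: tracks request latency
    and prediction confidence, and flags simple anomalies."""

    def __init__(self, window=100, latency_slo_ms=250.0, min_confidence=0.6):
        self.latencies = deque(maxlen=window)
        self.confidences = deque(maxlen=window)
        self.latency_slo_ms = latency_slo_ms
        self.min_confidence = min_confidence

    def record(self, latency_ms, confidence):
        self.latencies.append(latency_ms)
        self.confidences.append(confidence)

    def anomalies(self):
        """Return human-readable alerts for the current window."""
        alerts = []
        if self.latencies and statistics.mean(self.latencies) > self.latency_slo_ms:
            alerts.append("mean latency above SLO")
        if self.confidences and statistics.mean(self.confidences) < self.min_confidence:
            alerts.append("mean confidence below threshold")
        return alerts

monitor = ModelMonitor(window=5)
for latency, conf in [(120, 0.9), (140, 0.85), (900, 0.3), (950, 0.2), (880, 0.25)]:
    monitor.record(latency, conf)
print(monitor.anomalies())  # → ['mean latency above SLO', 'mean confidence below threshold']
```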


THE ROLE OF MONGODB IN OBSERVE.AI TECHNOLOGY STACK

The MongoDB developer data platform gives the company’s developers and data
scientists a unified solution to build smarter AI applications. Describing how
they use MongoDB, Jithendra says:

“Observe.AI processes and runs models on millions of support touchpoints daily
to generate insights for our customers. Most of this rich, unstructured data is
stored in MongoDB. We chose to build on MongoDB because it enables us to quickly
innovate, scale to handle large and unpredictable workloads, and meet the
security requirements of our largest enterprise customers.”
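Observe.AI has not published its schema, but a document for one analyzed support interaction might plausibly look like the following. The field names are hypothetical; the point is that nested, variable-length transcript data fits the document model naturally:

```python
from datetime import datetime, timezone

# Hypothetical document for one analyzed support call. Field names are
# illustrative, not Observe.AI's actual schema.
call_document = {
    "call_id": "c-20240429-0001",
    "agent_id": "agent-42",
    "started_at": datetime(2024, 4, 29, 14, 3, tzinfo=timezone.utc),
    "channel": "voice",
    "utterances": [
        {"speaker": "customer", "text": "My invoice looks wrong.", "sentiment": -0.6},
        {"speaker": "agent", "text": "Let me take a look at that.", "sentiment": 0.2},
        {"speaker": "customer", "text": "Thanks, that fixed it.", "sentiment": 0.8},
    ],
    "insights": {"intent": "billing", "escalated": False},
}

def mean_customer_sentiment(doc):
    """Average sentiment across customer utterances in one call document."""
    scores = [u["sentiment"] for u in doc["utterances"] if u["speaker"] == "customer"]
    return sum(scores) / len(scores) if scores else None

print(round(mean_customer_sentiment(call_document), 2))  # → 0.1
```

With pymongo, the same dict would be stored unchanged via `collection.insert_one(call_document)`; adding a new insight field later requires no schema migration.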


GETTING STARTED

Thanks so much to Jithendra for sharing details on the technology stack powering
Observe.AI’s conversation intelligence and MongoDB’s role.

To learn more about how MongoDB can help you build AI-enriched applications,
take a look at the MongoDB for Artificial Intelligence page. Here, you will find
tutorials, documentation, and whitepapers that will accelerate your journey to
intelligent apps.


← Previous


BUILDING AI WITH MONGODB: INTEGRATING VECTOR SEARCH AND COHERE TO BUILD FRONTIER
ENTERPRISE APPS

Cohere is the leading enterprise AI platform, building large language models
(LLMs) which help businesses unlock the potential of their data. Operating at
the frontier of AI, Cohere’s models provide a more intuitive way for users to
retrieve, summarize, and generate complex information. Cohere offers both text
generation and embedding models to its customers. Enterprises running
mission-critical AI workloads select Cohere because its models offer the best
performance-cost tradeoff and can be deployed in production at scale. Cohere’s
platform is cloud-agnostic. Their models are accessible through their own API as
well as popular cloud managed services, and can be deployed on a virtual private
cloud (VPC) or even on-prem to meet companies where their data is, offering the
highest levels of flexibility and control.

Cohere’s leading Embed 3 and Rerank 3 models can be used with MongoDB Atlas
Vector Search to convert MongoDB data to vectors and build a state-of-the-art
semantic search system. Search results can also be passed to Cohere’s Command R
family of models for retrieval augmented generation (RAG) with citations.

Check out our AI resource page to learn more about building AI-powered apps with
MongoDB.

A new approach to vector embeddings

It is in the realm of embedding where Cohere has made a host of recent advances.
Described as “AI for language understanding,” Embed is Cohere’s leading text
representation language model. Cohere offers both English and multilingual
embedding models, and gives users the ability to specify the type of data they
are computing an embedding for (e.g., search document, search query). The result
is embeddings that improve the accuracy of search results for traditional
enterprise search or retrieval-augmented generation.

One challenge developers faced using Embed was that documents had to be passed
one by one to the model endpoint, limiting throughput when dealing with larger
data sets. To address that challenge and improve developer experience, Cohere
has recently announced its new Embed Jobs endpoint. Now entire data sets can be
passed in one operation to the model, and embedded outputs can be more easily
ingested back into your storage systems.

Additionally, with only a few lines of code, Rerank 3 can be added at the final
stage of search systems to improve accuracy. It also works across 100+ languages
and offers uniquely high accuracy on complex data such as JSON, code, and
tabular structure. This is particularly useful for developers who rely on legacy
dense retrieval systems.

Demonstrating how developers can exploit this new endpoint, we have published
the How to use Cohere embeddings and rerank modules with MongoDB Atlas tutorial.
Readers will learn how to store, index, and search the embeddings from Cohere.
They will also learn how to use the Cohere Rerank model to provide a powerful
semantic boost to the quality of keyword and vector search results.

Figure 1: Illustrating the embedding generation and search workflow shown in the
tutorial

Why MongoDB Atlas and Cohere?

MongoDB Atlas provides a proven OLTP database handling high read and write
throughput backed by transactional guarantees. Pairing these capabilities with
Cohere’s batch embeddings is massively valuable to developers building
sophisticated gen AI apps. Developers can be confident that Atlas Vector Search
will handle high-scale vector ingestion, making embeddings immediately available
for accurate and reliable semantic search and RAG. Increasing the speed of
experimentation, developers and data scientists can configure separate vector
search indexes side by side to compare the performance of different parameters
used in the creation of vector embeddings.

In addition to batch embeddings, Atlas Triggers can also be used to embed new or
updated source content in real time, as illustrated in the Cohere workflow shown
in Figure 2.

Figure 2: MongoDB Atlas Vector Search supports Cohere’s batch and real-time
workflows. (Image courtesy of Cohere)

Supporting both batch and real-time embeddings from Cohere makes MongoDB Atlas
well suited to highly dynamic gen AI-powered apps that need to be grounded in
live, operational data. Developers can use MongoDB’s expressive query API to
pre-filter query predicates against metadata, making it much faster to access
and retrieve the more relevant vector embeddings. The unification and
synchronization of source application data, metadata, and vector embeddings in a
single platform, accessed by a single API, makes building gen AI apps faster,
with lower cost and complexity. Those apps can be layered on top of the secure,
resilient, and mature MongoDB Atlas developer data platform that is used today
by over 45,000 customers spanning startups to enterprises and governments
handling mission-critical workloads.

What's next?

To start your journey into gen AI and Atlas Vector Search, review our 10-minute
Learning Byte. In the video, you’ll learn about use cases, benefits, and how to
get started using Atlas Vector Search.
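The tutorial covers the end-to-end flow; as an offline sketch, an Atlas Vector Search query with a metadata pre-filter is just an aggregation pipeline. The index and field names below ("vector_index", "embedding", "language") are hypothetical and would need to match your collection and Atlas index definition:

```python
def build_vector_search_pipeline(query_vector, language, limit=5):
    """Assemble a $vectorSearch aggregation pipeline with a metadata
    pre-filter. Index and field names are hypothetical placeholders."""
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",
                "path": "embedding",
                "queryVector": query_vector,
                "numCandidates": 20 * limit,  # oversample candidates for better recall
                "limit": limit,
                "filter": {"language": {"$eq": language}},  # pre-filter on metadata
            }
        },
        {
            "$project": {
                "_id": 0,
                "text": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]

pipeline = build_vector_search_pipeline([0.1, 0.2, 0.3], "en")
print(pipeline[0]["$vectorSearch"]["limit"])  # → 5
# Against a live Atlas collection: results = collection.aggregate(pipeline)
```

In a real deployment the query vector would come from Cohere's Embed model, and the returned candidates could then be passed to Rerank 3 before being handed to a Command R model for RAG.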

April 25, 2024
Next →


MICROSERVICES: REALIZING THE BENEFITS WITHOUT THE COMPLEXITY

The microservice architecture has emerged as the preferred, modern approach for
developers to build and deploy applications on the cloud. It can help you
deliver more reliable applications, and it addresses the scale and latency
concerns of System Reliability Engineers (SREs) and operations. But
microservices aren't without their hangups. For developers, microservices can
lead to additional complexity and cognitive overhead, such as cross-service
coordination, shared state across multiple services, and coding and testing
failure logic across disconnected services. While the monolith was suboptimal
for compute and scale efficiencies, the programming model was simple. So the
question is, can we get the best of both worlds?

In addition, how do we make the individual services easier to build and adapt to
changing requirements? Since, at their core, microservices provide access to and
perform operations on data, how do we architect services so that developers can
easily work with data? How can we make it easier for developers to add new types
of data and data sources and perform a wide variety of data operations without
the complexity of managing caches and using multiple query languages (SQL,
full-text and vector search, time series, geospatial, etc.)?

The development complexity associated with microservice architectures occurs at
two levels: service orchestration and service data management. The diagram below
depicts this complexity. At the orchestration level, a typical application may
support tens or hundreds of processes, and each may have thousands or millions
of executions. To make this work, services are often connected by a patchwork of
queues. Developers spend quite a bit of time tracking and managing all the
various workflows. The sheer scale necessitates a central mechanism to manage
concurrent tasks and sharded databases to manage the state of millions of
concurrent workflow instances.

To add more complexity, each microservice is deployed using a set of data
platforms including RDBMS, caches, search engines, and vector and NoSQL
databases. Developers must work with multiple query languages, write code to
keep data in sync among these platforms, and write code to deal with edge cases
when, invariably, data or indexes are not in sync. Finally, developer
productivity is inhibited by the brittleness of RDBMS, which lacks flexibility
when trying to incorporate new or changing data types. As a result, microservice
applications often end up with complex architectures that are difficult to
develop against and maintain, in terms of both the individual microservices and
the service orchestration.

Realizing the benefits without the complexity

One approach to addressing these microservice challenges is to combine two
technologies: Temporal and MongoDB. Both give you the benefits of microservices
while simplifying the implementation of service orchestration. Together, they
allow developers to build services that can easily handle a wide variety of
data, eliminate the need to code for complex infrastructure, and reduce the
likelihood of failure. They simplify the data model and your code.

In one real-world example, open-source indexing company Mixpeek leverages the
combination of MongoDB and Temporal to provide a platform enabling organizations
to easily incorporate multi-modal data sources in AI applications. Mixpeek’s CEO
Ethan Steininger states, “Temporal’s durable execution guarantees and MongoDB's
flexible data model are core components of Mixpeek’s multimodal data processing
and storage. Combined, they enable our users to run high volume ML on commodity
hardware without worrying about dropped jobs.”

MongoDB and Temporal: Build like a monolith with durable microservices

Both MongoDB and Temporal were built by developers, for developers. They both
use a code-first approach to solving the complex infrastructure needs of modern
applications within application code, empowering developers to be more
productive. They are part of an emerging development stack that greatly
simplifies data and all the cross-functional coordination we need in our cloud
applications.

Ultimately, the combination of these two developer-focused platforms allows you
to simplify the design, development, and testing of microservice-based
applications. With the document model of MongoDB, you model data as real-world
objects, not tables, rows, and columns. With Temporal, you design your
end-to-end service flows as workflows as described by domain experts, without
having to explicitly identify every edge case and exception (Temporal handles
those implicitly).

Temporal and MongoDB provide benefits that, when combined, multiply in value.
You become more agile: not only can everyone understand your code better, but
you are no longer challenged by the cognitive overload of trying to coordinate,
comprehend, and test a web of disconnected and complex services. Together, they
let you reliably orchestrate business processes within apps that all speak the
language of the data itself.

Combining Temporal and MongoDB results in the simplified architecture shown
below. Temporal enables orchestration to be implemented at a higher level of
abstraction, eliminating much of the event management and queuing complexity.
MongoDB, in turn, provides a single integrated data platform with a unified
query language, thereby eliminating much of the data management complexity.

Let’s examine MongoDB and Temporal in more depth to better understand their
capabilities and why they facilitate the rapid development of
microservices-based applications.

MongoDB: Simplifying microservice data

MongoDB's features align well with the principles of microservices
architectures. It reduces the need for niche databases and the associated costs
of deploying and maintaining a complicated sprawl of data technologies. More
explicitly, MongoDB delivers key benefits for microservice development:

 * Flexible schema, flexible services: Unlike relational databases with rigid
   schemas, MongoDB's document model allows microservices to easily evolve as
   data requirements change.
 * Distributed scale for data-heavy, distributed services: MongoDB scales
   horizontally by adding more partitions to distribute the load. This aligns
   with the modular nature of microservices, where individual services can
   scale based on their specific needs.
 * Unified query language reduces microservice sprawl: MongoDB supports a
   diverse set of data operations without requiring multiple data platforms
   (caches, vector and text search engines, time series, geospatial, etc.).
 * Operational efficiency: MongoDB Atlas, the cloud-based version of MongoDB,
   simplifies managing databases for microservices. It handles provisioning,
   backups, and patching, freeing developers to focus on core responsibilities.
 * Integrated developer data platform: The integrated developer data platform
   delivers an intuitive set of tools to build services that support mobile
   clients, real-time analytics, data visualization, and historical analysis
   across many service databases.

With MongoDB, development teams use one interface for all their services and
run it anywhere, even across clouds. It also provides a data foundation for
your microservices that is highly available, scalable, and secure. It greatly
simplifies microservices development so that you can focus on your business
problems and not on data.

Temporal: Don't coordinate services, orchestrate them

Temporal delivers an open-source, durable execution solution that removes the
complexity of building scalable distributed microservices. It presents a
development abstraction that preserves the complete application state so that,
in the case of a host or software failure, execution can seamlessly migrate to
another machine. This means you can develop applications as if failures (like
network outages or server crashes) do not exist. Temporal handles these issues,
allowing you to focus on implementing business logic rather than coding complex
failure detection and recovery routines.

Here's how Temporal simplifies application development:

 * Durable workflows: Temporal maintains the state and progress of a defined
   workflow across multiple services, even in the face of server crashes,
   network partitions, and other types of failures. This durability ensures
   that your application logic can resume where it left off, making your
   overall application more resilient.
 * Simplified failure handling: Temporal abstracts away the complex error
   handling and retry logic that developers typically have to implement in
   distributed systems. This abstraction allows developers to focus on business
   logic rather than the intricacies of ensuring their end-to-end services can
   handle failures gracefully.
 * Scale: Temporal applications are inherently scalable, capable of handling
   billions of workflow executions.
 * Long-running services: Temporal supports long-running operations, from
   seconds to years, with the same level of reliability and scalability.

By providing a platform that handles the complexities of distributed systems,
Temporal allows developers to concentrate on implementing business logic in
their services. This focus leads to faster development times and more reliable
applications, as developers are not bogged down by the intricacies of state
management, retries, and error handling.

The next generation of microservices development is here

Developers want to code. They want to solve business problems. They do not want
to be bogged down by the complexity of infrastructure failures. They want to
model their apps and data so that they align with the real-world entities and
domains they are solving for. Using MongoDB and Temporal together addresses
these complexities. Together, they simplify the design, development, and testing
of microservice-based applications so that you can focus on business problems
and deliver more features faster.

Getting started with Temporal and MongoDB Atlas

We can help you design the best architecture for your organization’s needs. Feel
free to connect with your MongoDB and Temporal account teams, or contact us to
schedule a collaborative session and explore how Temporal and MongoDB can
optimize your AI development process.
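Temporal's SDKs hide all of this machinery, but the durable-execution idea can be illustrated in a few lines of plain Python. This is a conceptual sketch, not Temporal's API: persist each step's result, and on re-run replay completed steps instead of re-executing them.

```python
# Conceptual sketch of durable execution (NOT Temporal's API): each step's
# result is persisted, so re-running a workflow after a crash skips completed
# steps and resumes where it left off. A dict stands in for the persisted
# history; Temporal provides this transparently, at scale, across machines.

def run_workflow(steps, history):
    """Execute steps in order, replaying persisted results on re-run."""
    results = []
    for name, fn in steps:
        if name in history:                # already completed before a crash
            results.append(history[name])  # replay, do not re-execute
            continue
        value = fn()
        history[name] = value              # persist before moving on
        results.append(value)
    return results

calls = []

def charge_card():
    calls.append("charge")
    return "charged"

def send_receipt():
    calls.append("receipt")
    return "sent"

steps = [("charge_card", charge_card), ("send_receipt", send_receipt)]
history = {}
run_workflow(steps, history)   # first run executes both steps
run_workflow(steps, history)   # a "restart after crash" replays, not re-executes
print(calls)  # → ['charge', 'receipt']  (no duplicate side effects)
```

The replay-from-history trick is why a durable workflow can survive a process restart without double-charging a card; Temporal applies the same principle to distributed workers and long-running workflows.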

June 3, 2024



© 2024 MongoDB, Inc.
