
30 Fastest Growing Tech Companies 2023

Build incredible products with world-class language AI: Cohere


Cohere empowers every developer and enterprise to build amazing products and capture true business value with language AI. Cohere believes that the union of research and product will realize a world where technology commands language in a way that is as compelling and coherent as our own. The company lives at the forefront of ML/AI research, bringing the latest advancements in language AI to its platform. Cohere's cutting-edge large language models are built on the Transformer architecture and trained on supercomputers, providing NLP solutions that don't require expensive ML development. With a world-class team of experts, Cohere is dedicated to helping companies revolutionize their operations and maximize their potential in real-world business applications.

Searching for information using traditional keyword-based search systems can be frustrating. You type in a phrase and get back a list of results that has little to do with what you are looking for. It's like trying to find a needle in a haystack.

In contrast, a semantic-based search system can contextualize the meaning of a user's query beyond keyword relevance, allowing it to return more relevant and accurate results.

But a complete migration to semantic-based search using embeddings is challenging for many companies. Their keyword-based search system has been in place for a long time, and it is often an important part of the company’s information architecture. Migrating to a vector database that supports embedding-based search is, in many cases, just not feasible.

The Cohere Rerank endpoint is designed to bridge this gap. What's more, Rerank delivers much higher-quality results than embedding-based search, and it requires only a single line of code change in your application.

Introducing the Cohere Rerank Endpoint

Cohere is excited to announce the availability of its Rerank endpoint, which acts as the last stage of a search flow, ranking the most relevant documents for a user's query. This means that companies can retain an existing keyword-based (also called "lexical") or semantic search system for first-stage retrieval and integrate the Rerank endpoint for second-stage re-ranking. When used with a keyword-based search engine such as Elasticsearch, OpenSearch, or Solr, the Rerank endpoint can be added to the end of an existing search workflow, letting users incorporate semantic relevance into their keyword-based search system without changing the existing infrastructure. This is an easy, low-complexity way to improve search results by introducing semantic search technology into a user's stack with a single line of code.
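The two-stage flow described above can be sketched as follows. The real second stage would be a single call to Cohere's Rerank endpoint (via the Cohere SDK, roughly `co.rerank(query=..., documents=..., top_n=...)`); here a toy keyword-overlap scorer stands in so the sketch runs offline, and the first-stage retriever is a naive stand-in for a system like Elasticsearch:

```python
# Sketch: a keyword-based first stage followed by a second-stage re-ranker.
# The toy scorer below is NOT Cohere's model; it only illustrates where the
# Rerank call slots into an existing search workflow.

def keyword_search(query, corpus):
    """First stage: naive keyword retrieval (stand-in for Elasticsearch/Solr)."""
    terms = set(query.lower().split())
    return [doc for doc in corpus if terms & set(doc.lower().split())]

def rerank(query, documents, top_n=3):
    """Second stage: order candidates by relevance to the query.

    With the Cohere SDK, this whole function would be replaced by a single
    endpoint call, e.g.:
        results = co.rerank(query=query, documents=documents, top_n=top_n)
    """
    terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return scored[:top_n]

corpus = [
    "Capital gains tax rates for 2023",
    "How to file a tax return",
    "Best hiking trails near the capital",
]
candidates = keyword_search("capital gains tax", corpus)
top = rerank("capital gains tax", candidates, top_n=1)
print(top[0])
```

The key design point is that the first stage is untouched: the re-ranker only reorders whatever candidate set the existing system already returns.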

Cohere offers an API to add cutting-edge language processing to any system. Cohere trains massive language models and puts them behind a simple API. Moreover, through training, users can create massive models customized to their use case and trained on their own data. This way, Cohere handles the complexities of collecting massive amounts of text data, ever-evolving neural network architectures, distributed training, and serving models around the clock.

Using the Playground

Generate

  • Try tinkering with different temperature and token-picking settings to alter the model's output behavior.
  • To further improve your generations or to get the model to focus on generating text about a specific topic, try uploading a sample text to train the model. If you're interested in training a model, please submit a Full Access request from your Cohere Dashboard.
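To make the temperature setting concrete, here is a generic sketch of temperature sampling (not Cohere's internal implementation): the model's raw scores (logits) are divided by the temperature before the softmax, so low values sharpen the distribution toward the most likely token and high values flatten it toward more diverse output.

```python
import math
import random

def sample_token(logits, temperature=1.0, seed=None):
    """Sample a token index from logits after temperature scaling.

    Lower temperature -> sharper distribution, more deterministic output.
    Higher temperature -> flatter distribution, more diverse output.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    rng = random.Random(seed)
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]
# At a very low temperature the highest-logit token dominates.
print(sample_token(logits, temperature=0.01, seed=0))  # → 0
```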

Try asking the model to do any of the following:

  • Summarize a paragraph of text
  • Generate SEO tags for a blog post
  • Produce some questions for your next trivia night
  • Provide ideas of what to do in your city this weekend

In each case, give the model a few examples of your desired output.

Additionally, note the Show Likelihood button within Advanced Parameters. This feature outputs the likelihood that each token would be generated by the model in the given sequence, as well as the average log-likelihood of each token in the input. Token likelihoods can also be retrieved from the Generate endpoint.

The log-likelihood is useful for evaluating model performance, especially when testing user-trained models. If you're interested in training a model, please submit a Full Access request from your Cohere Dashboard.
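To illustrate what an average log-likelihood measures, here is a small self-contained sketch using made-up per-token probabilities (not real model output): a value closer to zero means the model found the sequence more probable, which is why the metric is handy for comparing models on the same text.

```python
import math

def avg_log_likelihood(token_likelihoods):
    """Average log-likelihood over a sequence's per-token probabilities.

    Values are negative; closer to 0 means the model assigned the
    sequence higher probability overall.
    """
    return sum(math.log(p) for p in token_likelihoods) / len(token_likelihoods)

# Hypothetical per-token probabilities for two candidate outputs.
fluent = [0.9, 0.8, 0.85]
awkward = [0.4, 0.2, 0.3]
assert avg_log_likelihood(fluent) > avg_log_likelihood(awkward)
```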

Embed

Using Embed in the Playground enables users to assign numerical representations to strings and visualize comparative meaning on a 2-dimensional plane. Phrases similar in meaning should ideally be closer together on this visualization. Add a couple of your own phrases and see if the Playground visualization feels accurate to you.
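The 2D visualization can be reproduced with a simple PCA projection. The sketch below uses toy four-dimensional vectors (made up for illustration, not real Cohere embeddings) and checks that the two similar phrases land closer together than either does to the outlier:

```python
import numpy as np

def project_2d(embeddings):
    """Project high-dimensional embeddings to 2D with PCA,
    mirroring the kind of plot the Playground draws."""
    X = np.asarray(embeddings, dtype=float)
    X = X - X.mean(axis=0)                       # center the data
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T                          # keep top two components

# Toy "embeddings": two similar phrases and one unrelated outlier.
emb = [
    [0.90, 0.10, 0.00, 0.20],   # "I like cats"
    [0.85, 0.15, 0.05, 0.20],   # "I love cats"
    [0.10, 0.90, 0.80, 0.10],   # "Stock prices fell"
]
pts = project_2d(emb)

dist = lambda a, b: np.linalg.norm(a - b)
# The two cat phrases end up closer together than either is to the outlier.
assert dist(pts[0], pts[1]) < dist(pts[0], pts[2])
```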

Larger models are more capable of complex tasks, but smaller models offer faster response times at lower cost. Here is a rough guideline for which model size to use for various tasks:

Generation models

Command
Command is the most capable generative model and can perform any task other models can with better results. This model is well suited for challenging tasks including complex extraction, rewriting, question-answering, summarization, conversation, and brainstorming.

Command Light
Command Light provides a great tradeoff between power and speed. Use this model to power tasks like generating marketing ad-copy, extracting key entities from text, or powering conversational agents.

Aidan Gomez, Co-Founder & CEO

“Cohere’s large language models unleash powerful capabilities, like content generation, summarization, and search — all at massive scale.”
