Artificial Intelligence | News, analysis, features, how-tos, and videos
RAG is a pragmatic and effective approach to using large language models in the enterprise. Learn how it works, why we need it, and how to implement it with OpenAI and LangChain.
Learn how to build and deploy a machine learning model in a Java-based production environment using Weka, Docker, and REST.
Set up a supervised learning project, then develop and train your first prediction function using gradient descent in Java.
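The article works in Java, but the gradient descent loop at the heart of that prediction function looks much the same in any language. A minimal sketch in Python, using illustrative data and an illustrative learning rate (not the article's own code):

```python
# Batch gradient descent fitting y ≈ w*x + b to toy data generated by y = 2x + 1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]

w, b = 0.0, 0.0
lr = 0.05  # learning rate: step size for each parameter update

for _ in range(2000):
    n = len(xs)
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    # Step downhill against each gradient.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

Each iteration nudges the parameters against the error gradient; with enough iterations on this exactly linear data, the fit recovers the generating slope and intercept.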
We can dramatically increase the accuracy of a large language model by providing it with context from custom data sources. LangChain makes this integration easy.
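The core of that pattern is simple: retrieve the snippet of your own data most relevant to the question, then place it in the prompt as context. A LangChain-free sketch of the idea in plain Python (the documents, query, and word-overlap scoring are illustrative; a real pipeline would use embeddings and a vector store):

```python
# Toy retrieval-augmented prompt: pick the document sharing the most
# words with the question, then stuff it into the prompt as context.
docs = [
    "Acme's return policy allows refunds within 30 days of purchase.",
    "Acme ships to the US, Canada, and the EU.",
    "Acme support is available weekdays from 9am to 5pm.",
]

def words(text: str) -> set:
    # Lowercase and strip punctuation so "policy?" matches "policy".
    return {w.strip("?.,!") for w in text.lower().split()}

query = "What is the return policy?"

# Crude relevance score: number of words the document shares with the query.
context = max(docs, key=lambda d: len(words(d) & words(query)))

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The assembled prompt, grounded in the retrieved document, is what gets sent to the model; the model never needs to have seen your private data at training time.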
Get a hands-on introduction to generative AI with these Python-based coding projects using OpenAI, LangChain, Matplotlib, SQLAlchemy, Gradio, Streamlit, and more.
LangChain's advantages include clean, simple code and the ability to swap models with minimal changes. Let's try LangChain with the PaLM 2 large language model.
Here's your chance to use TensorFlow with JavaScript. Train a neural network to predict the rise and fall of Bitcoin prices.
LangChain is one of the hottest platforms for working with LLMs and generative AI—but it's typically only for Python and JavaScript users. Here's how R users can get around that.
Learn how to use Google Cloud Vertex AI and the PaLM 2 large language model to create text embeddings and search text ranked by semantic similarity.
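Whichever model produces the embeddings, the ranking step is the same: compute the cosine similarity between the query vector and each document vector, then sort by it. A sketch with tiny hand-made vectors standing in for real embeddings (the numbers are illustrative, not Vertex AI or PaLM 2 output):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Stand-in "embeddings"; a real pipeline would request these from the model.
corpus = {
    "cats": [0.9, 0.1, 0.0],
    "dogs": [0.8, 0.2, 0.1],
    "stocks": [0.0, 0.1, 0.9],
}
query_vec = [0.85, 0.15, 0.05]

# Rank documents by semantic similarity to the query, most similar first.
ranked = sorted(corpus, key=lambda k: cosine(query_vec, corpus[k]), reverse=True)
print(ranked)
```

Because cosine similarity measures the angle between vectors rather than their length, texts with similar meaning score close to 1 regardless of how long they are.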
Using the PaLM 2 large language model available in Google Cloud Vertex AI, you can create a chatbot in just a few lines of code. These are the steps.