Getting Started with Solar Chat
Want to automate tasks like text generation, summarization, and comprehension using Solar LLM? If so, let’s dive into how you can build your conversational AI system using the Solar Chat API.
This guide will walk you through using the Solar LLM via API to perform tasks such as text generation, summarization, and question answering. We'll go beyond understanding what an API is—you'll also learn how to construct messages, handle streaming responses, and apply prompt engineering techniques.
You can integrate a powerful language model into your service with just a few simple configurations!
Upstage’s Solar series offers lightweight but high-performing large language models (LLMs) for various language tasks, such as text generation, document summarization, and question answering.
The Solar Chat API provides access to these models via a web-based API, making it easy for anyone to integrate and utilize them in their own applications.
Solar Chat is compatible with the OpenAI API format.
The key functionality centers around the chat.completions.create() method, where you send a list of messages and receive a model-generated response.
This is the most basic usage pattern — a single question followed by a single response.
In a single-turn structure, there is only one cycle: User → Model → Response.
The model does not retain any context from previous interactions. It simply responds to the message provided in that one request.
✅ Elements
model
Specify the name of the Solar model you wish to use: either solar-pro or solar-mini.
messages
Provide a list of messages that define the conversation history. Each message should be a dictionary with two required keys:
role: specifies who is speaking (e.g., "user", "assistant", "system").
content: contains the text of the message.
stream
Decide whether to receive the model's response all at once (stream=False) or incrementally as it is being generated (stream=True). When stream=True, responses are delivered in a real-time streaming fashion, which is useful for chat-like interactions or when you want faster feedback.
Replace "up_your_api_key_here" with your actual API key (e.g., "up_xxxx").
The model parameter must be set to either solar-pro or solar-mini.
If you're participating in the AI Initiative Program, you can use solar-pro for free until March 31st, 2026.
stream = True
Streams the model's response in real time as it is being generated.
This allows for a smoother user experience, especially with longer responses, since there is no need to wait for the full output.
stream = False
The model generates the entire response first, then returns it all at once.
This approach is perfectly fine for simple or short tasks.
This setup allows for multiple rounds of interaction, where the model can remember and build on previous exchanges.
The model maintains conversational context, enabling a more natural and coherent dialogue.
It generates responses by referring to all previous messages in the conversation history.
🧠 To help the model understand the flow of the conversation, you need to accumulate all previous messages in the messages list.
✅ Structure
The messages list stores the entire dialogue history as an array.
Each turn follows a consistent pattern: user → assistant → user → assistant, and so on.
Prompts tell the model what role to play and how to respond.
A message can carry one of three roles: system, user, or assistant. The system role is where you define the prompt that shapes the model's behavior.
This sets the model's overall tone, personality, and behavior rules.
You can think of it as instructing the model on how it should behave throughout the conversation.
Examples:
"You are a kind and polite assistant."
→ Responses will be friendly and respectful.
"You are a strict grammar teacher."
→ The model will correct grammar and focus on accuracy.
🎯 The same question can produce different answers depending on the system prompt!
This part contains the user’s question or instruction — essentially, what you want the model to do.
Examples:
“What’s the weather like in Seoul today?”
“Summarize this post in 3 sentences.”
🎯 The LLM uses the user's message as the primary input to generate its response.
This part contains the model's generated response or output.
Include assistant messages if you want the model to remember what it said previously and keep the conversation coherent across multiple turns.
Examples:
“The weather in Seoul today is sunny with a temperature of 24°C.”
“Here’s a 3-sentence summary of the post…”
🎯 Including assistant messages helps the model retain context and continue the conversation naturally.
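The three roles above come together in a single messages list. The sketch below reuses the weather example from this section; the assistant entry replays the model's earlier answer so the follow-up user question has context:

```python
# A messages list touching all three roles. The assistant entry replays the
# model's earlier answer so the follow-up user question has context.
messages = [
    {"role": "system",
     "content": "You are a kind and polite assistant."},
    {"role": "user",
     "content": "What's the weather like in Seoul today?"},
    {"role": "assistant",
     "content": "The weather in Seoul today is sunny with a temperature of 24°C."},
    {"role": "user",
     "content": "Should I bring an umbrella, then?"},
]

roles = [m["role"] for m in messages]
# roles == ["system", "user", "assistant", "user"]
```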
Experience a live demo to see how different system prompts affect the model’s responses.
You can:
Enter a custom system prompt to define the model’s personality or behavior.
Compare how the model responds based on different prompt settings.
Download the entire conversation as a CSV file to save and review the results.
Left Panel: Uses the default system prompt (You are a helpful assistant.).
Right Panel: Uses a custom prompt entered by the user.
→ Try assigning the model a specific persona or role!
Compare the model's response to the same question under two distinct prompt settings.
You can check out the full code here: ⇒ [LINK]
Download the left and right responses as a .csv file.
Useful for saving conversation logs and conducting comparative analysis.
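Saving a conversation log like the demo does is straightforward with the standard library. The helper below is a hypothetical sketch (not the demo's actual code) that writes a messages list to a two-column CSV:

```python
import csv

def save_dialogue_csv(messages, path="conversation_log.csv"):
    """Write a messages list to a two-column CSV (hypothetical helper,
    mirroring the demo's download-as-CSV feature)."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["role", "content"])
        writer.writeheader()
        writer.writerows(messages)

save_dialogue_csv([
    {"role": "user", "content": "Summarize this post in 3 sentences."},
    {"role": "assistant", "content": "Here's a 3-sentence summary of the post..."},
])
```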
👉 Want to customize the demo to suit your needs?
You can fork it on Hugging Face and create your own Streamlit Space for editing and deployment!
In this section, you learned how to get started with the Solar Chat API.
🔹 A step-by-step walkthrough from issuing an API key → connecting to a Solar model → handling conversations
🔹 How to implement both single-turn and multi-turn chat flows
🔹 Overview of the different roles used in the Chat API and how to apply them effectively
🔹 A hands-on demo comparing the effects of different system prompts using sample code
YoungHoon Jeon | AI Edu | Upstage
➡️ For more in-depth instructions on using Solar Chat, check out the official documentation.
Interested in joining? Apply here:
Curious about how to write great prompts for Solar? Check out the solar-prompt-cookbook here:
👉