Getting Started with Solar Chat
1. Introduction to Getting Started with Solar Chat
Want to automate tasks like text generation, summarization, and comprehension using Solar LLM? If so, let’s dive into how you can build your conversational AI system using the Solar Chat API.
This guide will walk you through using the Solar LLM via API to perform tasks such as text generation, summarization, and question answering. We'll go beyond understanding what an API is—you'll also learn how to construct messages, handle streaming responses, and apply prompt engineering techniques.
You can integrate a powerful language model into your service with just a few simple configurations!
➡️ For more in-depth instructions on using Solar Chat, check out the official documentation.
2. What is Solar Chat API?
Upstage’s Solar series offers lightweight but high-performing large language models (LLMs) for various language tasks, such as text generation, document summarization, and question answering.
The Solar Chat API provides access to these models via a web-based API, making it easy for anyone to integrate and utilize them in their own applications.
Solar Chat is compatible with the OpenAI API format.
The key functionality centers around the chat.completions.create() method, where you send a list of messages and receive a model-generated response.
3. Getting Started with Solar Chat
✅ 3.1. Single-turn Chat
This is the most basic usage pattern — a single question followed by a single response.
In a single-turn structure, there is only one cycle: User → Model → Response.
The model does not retain any context from previous interactions. It simply responds to the message provided in that one request.
# pip install openai
from openai import OpenAI  # openai==1.52.2

client = OpenAI(
    api_key="up_your_api_key_here",
    base_url="https://api.upstage.ai/v1"
)

stream = client.chat.completions.create(
    model="solar-pro",
    messages=[
        {
            "role": "user",
            "content": "Hi, how are you?"
        }
    ],
    stream=True,
)

# Print the response chunk by chunk as it is generated
for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")

# Use with stream=False
# print(stream.choices[0].message.content)
✅ Elements
model
Specify the name of the Solar model you wish to use - either solar-pro or solar-mini.
messages
Provide a list of messages that define the conversation history. Each message should be a dictionary with two required keys:
- role: Specifies who is speaking (e.g., "user", "assistant", "system").
- content: Contains the text of the message.
stream
Decide whether to receive the model's response all at once (stream=False) or incrementally as it's being generated (stream=True). When stream=True, responses are delivered in a real-time streaming fashion, which can be useful for chat-like interactions or when you want faster feedback.
⚠️ Important Notes
- Replace "up_your_api_key_here" with your actual API key (e.g., "up_xxxx").
- The model parameter must be set to either solar-pro or solar-mini.
- If you're participating in the AI Initiative Program, you can use solar-pro for free until March 31st, 2026. Interested in joining? Apply here: LINK
☑️ Understanding the stream Parameter
stream = True
Streams the model's response in real time, chunk by chunk, as it is being generated.
This allows for a smoother user experience, especially with longer responses - no need to wait for the full output.
stream = False
The model generates the entire response first, then returns it all at once.
This approach is perfectly fine for simple or short tasks.
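For simple one-shot tasks, a non-streaming call can look like the minimal sketch below (it reuses the client configured in the snippet above; the question is just a placeholder):

response = client.chat.completions.create(
    model="solar-pro",
    messages=[{"role": "user", "content": "Explain what an LLM is in one sentence."}],
    stream=False,  # the full response arrives in a single object
)

# With stream=False, the text is in message.content rather than in delta chunks.
print(response.choices[0].message.content)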
🔁 3.2. Multi-turn Conversation
This setup allows for multiple rounds of interaction, where the model can remember and build on previous exchanges.
The model maintains conversational context, enabling a more natural and coherent dialogue.
It generates responses by referring to all previous messages in the conversation history.
def chat_with_solar(messages):
    response = client.chat.completions.create(
        model="solar-pro",
        messages=messages
    )
    return response.choices[0].message.content

# Start conversation
messages = [{"role": "user", "content": "Hello, who won the world series in 2020?"}]
response = chat_with_solar(messages)
print("Assistant:", response)
messages.append({"role": "assistant", "content": response})

# Next dialogue
messages.append({"role": "user", "content": "Where was it played?"})
response = chat_with_solar(messages)
print("Assistant:", response)
🧠 To help the model understand the flow of the conversation, you need to accumulate all previous messages in the messages list.
✅ Structure
- The messages list stores the entire dialogue history as an array.
- Each turn follows a consistent pattern: user → assistant → user → assistant, and so on.
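As a minimal sketch of how this accumulation works in practice (reusing the chat_with_solar function defined above; the loop and exit commands are illustrative):

# Hypothetical interactive loop: every turn is appended to `messages`,
# so the model always sees the full conversation history.
messages = []

while True:
    user_input = input("You: ")
    if user_input.strip().lower() in ("quit", "exit"):
        break

    messages.append({"role": "user", "content": user_input})
    reply = chat_with_solar(messages)  # defined in the snippet above
    messages.append({"role": "assistant", "content": reply})
    print("Assistant:", reply)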
4. 📌 Prompt and Role Settings
Prompts tell the model what role to play and how to respond.
The message structure uses three roles - system, user, and assistant - including one where you define the prompt.
4.1. System Prompt
This sets the model's overall tone, personality, and behavior rules.
You can think of it as instructing the model on how it should behave throughout the conversation.
Examples:
"You are a kind and polite assistant."
→ Responses will be friendly and respectful."You are a strict grammar teacher."
→ The model will correct grammar and focus on accuracy.
🎯 The same question can produce different answers depending on the system prompt!
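As a quick sketch of this effect (the system prompts are the examples above and the question is just a placeholder), you can send the same user message under two different system prompts and compare the outputs:

question = "Can you check this sentence: 'Me and him goes to school'?"

for system_prompt in ["You are a kind and polite assistant.",
                      "You are a strict grammar teacher."]:
    response = client.chat.completions.create(
        model="solar-pro",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- {system_prompt} ---")
    print(response.choices[0].message.content, "\n")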
4.2. User Prompt
This part contains the user’s question or instruction — essentially, what you want the model to do.
Examples:
“What’s the weather like in Seoul today?”
“Summarize this post in 3 sentences.”
🎯 The LLM uses the user's message as the primary input to generate its response.
Curious about how to write great prompts for Solar? Check out the solar-prompt-cookbook here: LINK
4.3. Assistant
This part contains the model's generated response or output.
Include assistant messages if you want the model to remember what it said previously and keep the conversation coherent across multiple turns.
Examples:
“The weather in Seoul today is sunny with a temperature of 24°C.”
“Here’s a 3-sentence summary of the post…”
🎯 Including assistant messages helps the model retain context and continue the conversation naturally.
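Putting the three roles together, a single request might carry a history like the sketch below (the contents are illustrative; note how the follow-up question only makes sense because the earlier assistant reply is included):

messages = [
    {"role": "system", "content": "You are a kind and polite assistant."},   # behavior rules
    {"role": "user", "content": "What's the weather like in Seoul today?"},  # earlier question
    {"role": "assistant", "content": "The weather in Seoul today is sunny with a temperature of 24°C."},  # earlier reply
    {"role": "user", "content": "Then what should I wear?"},                 # follow-up relying on context
]

response = client.chat.completions.create(model="solar-pro", messages=messages)
print(response.choices[0].message.content)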
5. ✍️ Comparing Responses Based on System Prompts (Demo)
Experience a live demo to see how different system prompts affect the model’s responses.
You can:
Enter a custom system prompt to define the model’s personality or behavior.
Compare how the model responds based on different prompt settings.
Download the entire conversation as a CSV file to save and review the results.

🌞 Solar Prompt Comparator
✅ Feature Overview
- Left Panel: Uses the default system prompt (You are a helpful assistant.).
- Right Panel: Uses a custom prompt entered by the user. → Try assigning the model a specific persona or role!
- Compare the model's responses to the same question under the two distinct prompt settings.
💻 Sample Code Snippet
import streamlit as st

default_prompt = "You are a helpful assistant."
custom_prompt = st.text_area("Write your own system prompt", "You are a helpful assistant.")

st.session_state.default_messages = [{"role": "system", "content": default_prompt}]
st.session_state.custom_messages = [{"role": "system", "content": custom_prompt}]
# Add a user question and output the response with two prompts simultaneously
You can check out the full code here: ⇒ [LINK]
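As a rough sketch of how the rest of the demo might be wired up (the full code is at the link above; the widget labels here are illustrative, and the OpenAI client from Section 3 is assumed to be configured in the same script):

question = st.text_input("Ask a question")

if question:
    col_left, col_right = st.columns(2)
    for col, msgs in ((col_left, st.session_state.default_messages),
                      (col_right, st.session_state.custom_messages)):
        msgs.append({"role": "user", "content": question})
        response = client.chat.completions.create(model="solar-pro", messages=msgs)
        answer = response.choices[0].message.content
        msgs.append({"role": "assistant", "content": answer})
        col.write(answer)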
📁 Chat History Export Feature
- Download the left and right responses as a .csv file.
- Useful for saving conversation logs and conducting comparative analysis.
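A minimal sketch of such an export, assuming the message lists accumulated in the demo above, pandas, and Streamlit's download button:

import pandas as pd

# Pair each user question with the answers from both panels.
# Index 0 is the system prompt, odd indices are user turns, even indices (from 2) are assistant turns.
rows = [
    {"question": q["content"], "default_answer": a1["content"], "custom_answer": a2["content"]}
    for q, a1, a2 in zip(st.session_state.default_messages[1::2],
                         st.session_state.default_messages[2::2],
                         st.session_state.custom_messages[2::2])
]

csv_data = pd.DataFrame(rows).to_csv(index=False).encode("utf-8")
st.download_button("Download chat history (.csv)", data=csv_data, file_name="chat_history.csv")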
👉 Want to customize the demo to suit your needs?
You can fork it on Hugging Face and create your own Streamlit Space for editing and deployment!
Wrap-Up
In this section, you learned how to get started with the Solar Chat API.
🔹 A step-by-step walkthrough from issuing an API key → connecting to a Solar model → handling conversations
🔹 How to implement both single-turn and multi-turn chat flows
🔹 Overview of the different roles used in the Chat API and how to apply them effectively
🔹 A hands-on demo comparing the effects of different system prompts using sample code
YoungHoon Jeon | AI Edu | Upstage