🔵 Читать на русском (Read in Russian)
`gpt_scientist` is a lightweight Python library for processing tabular data stored in Google Sheets (or CSV files) using OpenAI models (such as GPT-4o and GPT-4o-mini).
The library is designed primarily for social science researchers and other users without extensive programming experience who want to run AI-based textual analysis over tabular data with just a few lines of Python code.
The library is best used in Google Colab for processing Google Sheets. However, it can also be used locally with CSV files.
## Feedback and Collaboration
If you use gpt_scientist for your project, we would love to hear about it!
Your feedback helps us improve the library and better support real-world research and activist work.
Feel free to open an issue on GitHub, or reach out by email.
```bash
pip install gpt-scientist
```

```python
from gpt_scientist import Scientist

# Create a Scientist
sc = Scientist(api_key='YOUR_OPENAI_API_KEY')
# (or set it via the OPENAI_API_KEY environment variable)

# Set the system prompt that describes the general capabilities you need:
sc.set_system_prompt("You are an assistant helping to analyze customer reviews.")
# Or, if the system prompt is long (e.g. contains the theoretical frame of your
# research study), you can load it from a Google Doc:
# sc.load_system_prompt_from_google_doc('your-google-doc-id')

# Define the task prompt
prompt = "Analyze the review and provide the overall sentiment from 1 (very negative) to 5 (very positive), together with a short explanation."

# Analyze a Google Sheet
sc.analyze_google_sheet(
    sheet_key='your-google-sheet-key',  # you can use the full URL or just the part between /d/ and the next /
    prompt=prompt,
    input_fields=['review_text'],
    output_fields=['sentiment', 'explanation'],
    rows='2:12',  # optional: analyze only rows 2 to 12 in the sheet
)
```

This will:
- Read the first worksheet from your Google Sheet
- Create the `sentiment` and `explanation` columns in that sheet if they don't exist
- For each row in the specified range (2 to 12):
  - Read the content of the `review_text` column
  - Call the OpenAI model with the prompt and the review text
  - Write the results (sentiment and explanation) back into the sheet
Important: Google Sheets can only be accessed from Google Colab, so you need to run this code in a Colab notebook. To use the library locally with CSV files, call `sc.analyze_csv(...)` instead of `sc.analyze_google_sheet(...)` (see example).
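A minimal sketch of the local workflow, assuming `analyze_csv` takes a path to a CSV file and otherwise mirrors the parameters of `analyze_google_sheet` (the exact signature may differ; see the linked example):

```python
# Hypothetical call: the positional path argument and mirrored parameters
# are assumptions; consult the library's example for the exact signature.
sc.analyze_csv(
    'reviews.csv',
    prompt=prompt,
    input_fields=['review_text'],
    output_fields=['sentiment', 'explanation'],
)
```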
## Notes
- The library writes to the sheet as it goes, so even if you stop the execution, you will keep the results for the rows that were already processed.
- The library processes multiple rows in parallel (by default, 100 at a time). This makes processing much faster, but it also means the output cells may be filled in out of order.
- The library shows you the running cost of the API calls, so you can keep track of your spending (only for models whose prices it knows).
- If the output columns already exist, the library will skip rows where the outputs are already filled in, unless you specify `overwrite=True` (see the sketch below).
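For example, to reprocess rows whose outputs are already filled in (a sketch: passing `overwrite=True` as a keyword argument of `analyze_google_sheet` is assumed from the parameter name above):

```python
# Assumption: overwrite is accepted by analyze_google_sheet.
sc.analyze_google_sheet(
    sheet_key='your-google-sheet-key',
    prompt=prompt,
    input_fields=['review_text'],
    output_fields=['sentiment', 'explanation'],
    overwrite=True,  # re-run rows even if 'sentiment'/'explanation' are filled
)
```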
## Document Processing
Often, you may want to analyze not just short text fields, but longer documents — for example, interview transcripts. If your input cell in the Google Sheet contains a link to a Google Doc, the library will automatically open the document and feed its full content to the language model.
Important: If Google Sheets automatically converted your link into a "smart chip" (those clickable document previews), the library will not recognize it. You must ensure the spreadsheet cell contains a plain hyperlink, not a chip.
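For instance, if a column holds plain links to Google Docs, the same analysis call works unchanged (the column name `transcript_link` and the prompt below are illustrative):

```python
# Cells in 'transcript_link' contain plain hyperlinks to Google Docs;
# the library fetches each document's full text before calling the model.
sc.analyze_google_sheet(
    sheet_key='your-google-sheet-key',
    prompt='Summarize the main themes of this interview.',
    input_fields=['transcript_link'],
    output_fields=['themes'],
)
```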
## Quote Verification
One of the useful applications of GPT-based analysis is extracting quotes on specific topics from documents. The library includes a helper function to verify extracted quotes against the original source text.
You can call this after your call to `analyze_google_sheet`:
```python
sc.check_quotes_google_sheet(
    sheet_key='your-google-sheet-key',
    input_fields=['transcript'],
    output_field='gpt_extracted_quote',
    rows='4:5'  # optional: specify which rows to process
)
```

This function will:
- Create a new column `gpt_extracted_quote_verified` (if it doesn't exist yet).
- For each row, search for the extracted quote in any of the input fields (in this case, in the transcript).
- If it finds an exact or approximate match, it will write the exact version of the quote into `gpt_extracted_quote_verified`.
- Otherwise, it will insert `QUOTE NOT FOUND`.
This helps verify that the quotes generated by the model actually correspond to the original document, improving the reliability of automated extraction.
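Putting the two steps together (a sketch: the prompt wording and the `transcript` column are illustrative):

```python
# Step 1: ask the model to extract a verbatim quote (illustrative prompt).
sc.analyze_google_sheet(
    sheet_key='your-google-sheet-key',
    prompt='Extract a verbatim quote where the interviewee discusses their motivation.',
    input_fields=['transcript'],
    output_fields=['gpt_extracted_quote'],
)
# Step 2: verify each extracted quote against the original transcript.
sc.check_quotes_google_sheet(
    sheet_key='your-google-sheet-key',
    input_fields=['transcript'],
    output_field='gpt_extracted_quote',
)
```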
## Select a different worksheet
If your input data is not on the first sheet, add `worksheet_index=n` (e.g. `worksheet_index=1`) to the parameters of your `analyze_google_sheet` call. Indexing starts from 0, so 1 is the second sheet.
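For example, reusing the quick-start call from above:

```python
# Analyze the second worksheet (index 1) instead of the first (index 0).
sc.analyze_google_sheet(
    sheet_key='your-google-sheet-key',
    prompt=prompt,
    input_fields=['review_text'],
    output_fields=['sentiment', 'explanation'],
    worksheet_index=1,
)
```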
## Change the model
The default model is `gpt-4o-mini`: it is cheap and good enough for most tasks.
But you can use any model that is enabled for your OpenAI API key.
Just make this call before your call to `analyze_google_sheet`:
```python
sc.set_model('gpt-4o')
```

## Write results to a new sheet
If you don't want to modify the input sheet, add `in_place=False` to the parameters of your `analyze_google_sheet` call. This will create a new worksheet for the output.
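For example (other parameters as in the quick start):

```python
# Write outputs to a new worksheet instead of modifying the input sheet.
sc.analyze_google_sheet(
    sheet_key='your-google-sheet-key',
    prompt=prompt,
    input_fields=['review_text'],
    output_fields=['sentiment', 'explanation'],
    in_place=False,
)
```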
## Load a system prompt from a file
```python
sc.load_system_prompt_from_file('content/system.txt')
```

## Control parallel processing
Depending on your OpenAI account tier and the model you are using, you might be limited in how many requests you can make per minute. You can control the number of parallel requests with:
```python
sc.set_parallel_rows(10)
```

The default is 100.
## Set model parameters
```python
sc.set_model_params({
    'top_p': 0.5,
    'max_completion_tokens': 100
})
```

Pass any parameters supported by the OpenAI API. Common options:

- `top_p`: controls diversity via nucleus sampling (0.0-1.0). Lower values make responses more deterministic.
- `max_completion_tokens`: limits response length. Protects from excessive costs if the model generates very long outputs.
- `temperature`: controls randomness (0.0-2.0).
- `reasoning_effort`: for reasoning models (`'none'`, `'low'`, `'medium'`, `'high'`).
## Adjust retries and batch sizes
```python
sc.set_num_results(5)
sc.set_num_retries(20)
```

`set_num_retries` controls how many times the library retries after a bad response (default: 10). `set_num_results` controls how many completions are requested at once, which is useful if the input size is much bigger than the output size and the responses are often bad.
## Customize token pricing
```python
sc.set_pricing({'gpt-3.5-turbo': {'input': 1.5, 'output': 2}})
```

If you are using a model not included in the built-in pricing table, or if token prices have changed, you can define your own prices (in dollars per million tokens).
This library was created as a result of my collaboration with the Hannah Arendt Research Center, and the idea is due to the Center's founder, Mariia Vasilevskaia.