darius-soh/geekinmyA
Sure Bo?

A multilingual news credibility and misinformation awareness platform that helps users evaluate articles, claims, and online content more critically.

Overview

Sure Bo? is designed to help users navigate the growing problem of misinformation online. Instead of acting as a black-box detector that simply labels content as true or false, the platform aims to make credibility checking more understandable, accessible, and educational.

Users can search for a claim, article, or news topic, view a credibility assessment, translate content into their preferred language, and learn how to identify misleading information for themselves through a built-in fake news guide.

Problem Statement

Misinformation spreads quickly online, especially through headlines, reposts, screenshots, and out-of-context claims. Many users do not have the time, confidence, or media literacy skills to verify every piece of information they encounter.

This problem is made worse by two key issues:

  1. Language barriers
    Not everyone consumes information in English. Many users are more comfortable searching, reading, and verifying information in their own language.

  2. Over-reliance on tools
    Many credibility tools give users a score or result, but do not help them understand why a piece of content may be misleading, or how to verify it independently in the future.

Our Solution

Sure Bo? addresses both of these issues by combining:

  • AI-assisted credibility analysis
  • Multilingual search and translation
  • An educational guide page for spotting fake news

The platform is designed not only to help users assess information in the moment, but also to improve their long-term digital literacy.

Key Features

1. Credibility analysis

Users can input a claim, article, or news link and receive a credibility assessment based on the platform’s analysis pipeline.

The result is presented in a simple and understandable format, including:

  • credibility assessment
  • confidence level
  • evidence signal strength
  • a summarized explanation of the result

This keeps the output accessible to everyday users instead of overwhelming them with technical details.
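The result fields listed above could be modeled as a small typed structure. This is a minimal sketch: the field names, verdict labels, and formatting helper below are assumptions for illustration, not the platform's actual schema or API.

```typescript
// Hypothetical shape of a credibility result (all names are illustrative).
type Verdict = "likely-credible" | "uncertain" | "likely-misleading";

interface CredibilityResult {
  verdict: Verdict;                                  // credibility assessment
  confidence: number;                                // confidence level, 0..1
  evidenceStrength: "weak" | "moderate" | "strong";  // evidence signal strength
  summary: string;                                   // summarized explanation
}

// Render a result as a single user-friendly line.
function formatResult(r: CredibilityResult): string {
  const pct = Math.round(r.confidence * 100);
  return `${r.verdict} (${pct}% confidence, ${r.evidenceStrength} evidence): ${r.summary}`;
}
```

Keeping the presentation down to a verdict, a confidence figure, and a one-line summary is what lets the UI stay approachable for non-technical users.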

2. Multilingual support

The platform supports multilingual usage so that users are not restricted to English when checking information online.

Users can:

  • search in their own language
  • interact with the platform more comfortably in their preferred language
  • access credibility-related content in a way that is more inclusive and accessible

This is especially important for diverse communities where misinformation may spread across multiple languages.
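The multilingual behaviour described above could be sketched as a small string table with an English fallback. The languages and messages below are illustrative examples only, not the platform's actual resources or implementation.

```typescript
// Minimal sketch of multilingual UI strings (languages and text are examples).
const messages: Record<string, Record<string, string>> = {
  en: { verdict_uncertain: "This claim could not be verified." },
  zh: { verdict_uncertain: "无法核实这一说法。" },
  ms: { verdict_uncertain: "Dakwaan ini tidak dapat disahkan." },
};

// Look up a UI string, falling back to English, then to the key itself.
function t(lang: string, key: string): string {
  return messages[lang]?.[key] ?? messages["en"][key] ?? key;
}
```

The English fallback matters in practice: a partially translated interface should degrade gracefully rather than show blank labels to users in less-covered languages.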

3. Search in your own language

Users can enter claims, keywords, or article-related queries in their preferred language rather than being forced to translate them manually into English first.

This improves:

  • accessibility
  • user convenience
  • relevance for multilingual communities
  • adoption among users who may otherwise avoid fact-checking tools due to language barriers

4. Page translation

The platform allows users to translate pages and content into a language they are more comfortable with.

This helps users:

  • understand articles written in other languages
  • compare reports across sources from different regions
  • verify claims more confidently without relying only on machine summaries

By reducing language friction, the platform makes cross-checking information much easier.

5. Fake news guide page

The platform includes a dedicated educational guide page that teaches users how to identify misinformation on their own.

This is an important part of the project because the goal is not to make users dependent on a tool forever. Instead, we want to help users build the habits and critical thinking skills needed to spot suspicious content independently.

The guide page teaches users how to:

  • check the original source
  • compare multiple sources
  • identify emotionally manipulative headlines
  • spot suspicious website URLs
  • distinguish reporting from opinion
  • recognize missing context
  • notice when there is weak or missing evidence
  • slow down before sharing unverified content

This makes the platform both a credibility-checking tool and a media literacy resource.

6. User-first design

The platform is designed to present credibility information in a simple and approachable way. Rather than relying on opaque scoring alone, the interface is built to help users understand what the result means and why it was given.

Why This Project Matters

Misinformation is not just a technical problem. It is also a human and educational problem.

A good solution should not only detect suspicious content, but also:

  • make verification accessible to more people
  • work across languages
  • help users understand the reasoning behind the analysis
  • teach users how to think critically without needing to rely on tools all the time

Sure Bo? is built with that philosophy in mind.

How It Works

  1. A user enters a claim, keyword, or article link.
  2. The platform analyzes the content and produces a credibility assessment.
  3. The user can view the explanation, score, and supporting signals.
  4. If needed, the user can translate the page or search in another language.
  5. The user can also visit the guide page to learn how to identify fake news independently.
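The steps above can be sketched as a single pipeline function. Every helper here is a hypothetical stub standing in for the real modules; the actual language detection and analysis pipeline are not shown in this README.

```typescript
// Hypothetical end-to-end flow (all helpers are illustrative stubs).
interface Assessment {
  verdict: string;
  confidence: number;   // 0..1
  explanation: string;
}

// Stub: a real implementation would use a language-detection library.
function detectLanguage(query: string): string {
  return /[^\x00-\x7F]/.test(query) ? "non-en" : "en";
}

// Stub: the real pipeline would retrieve evidence and score the claim.
function analyze(query: string): Assessment {
  return {
    verdict: "uncertain",
    confidence: 0.5,
    explanation: `No strong evidence found for: ${query}`,
  };
}

// Steps 1-3: accept input in any language, analyze it, return the
// assessment with its explanation and detected language attached.
function checkClaim(query: string): Assessment & { language: string } {
  const language = detectLanguage(query);
  const result = analyze(query);
  return { ...result, language };
}
```

Steps 4 and 5 (translation and the guide page) sit alongside this flow in the UI rather than inside the analysis pipeline itself.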

Target Users

This platform is useful for:

  • students
  • everyday social media users
  • multilingual communities
  • people who want quick help verifying online claims
  • users who want to improve their media literacy

Tech Stack

Replace this section with your final stack if needed.

  • Frontend: [React / Next.js / Vite]
  • Styling: [Tailwind CSS / shadcn/ui / CSS]
  • Deployment: Vercel
  • AI / analysis layer: [OpenAI API or your chosen model/API]
  • Database / source storage: [Pinecone / JSON / Supabase / other]
  • Translation / multilingual support: [your implementation]

Future Improvements

Possible future enhancements include:

  • stronger evidence retrieval from trusted sources
  • better source comparison features
  • expanded multilingual support
  • improved explainability for credibility decisions
  • personalized recommendations based on user interests
  • deeper educational content in the fake news guide

Impact

The long-term goal of Sure Bo? is to help users become more informed, more critical, and less vulnerable to misinformation.

Rather than replacing human judgment, the platform supports it.

Installation

git clone https://github.com/darius-soh/geekinmyA.git
cd geekinmyA
npm install
npm run dev
