A New York Times-featured celebrity fashion fansite pioneering "Always Fresh Lite" (AFL), an adaptive Next.js rendering system that keeps content feeling fresh by orchestrating how humans, crawlers, and AI agents experience the site—without heavy infrastructure, frequent updates, or SEO compromises. Designed and built end-to-end by Rito under RitoVision as an integrative effort spanning Product, Brand, UX, and Technology (full-stack and DevOps). This document focuses on an architectural overview.
Live Site - CarolineVreeland.com
The project began in 2018, was featured in a 2020 NYT article introducing fashion icon Caroline Vreeland, and has undergone several iterations, with this Next.js-based repo being the third and latest version (v3.1). The full project case study can be found here, and the AFL strategy showcased in this iteration is detailed in the Always Fresh Lite (AFL) section.
- Tech Stack
- At-a-Glance
- Always Fresh Lite — Strategic Breakdown
- Getting Started
- Testing
- Error Boundaries
- Accessibility
- JSON-LD — Semantic Data & SEO
- Legal
- Learn More
Framework & Hosting
- Next.js (Pages Router) on Vercel (serverless)
Edge Computing
- Vercel CDN
- Next.js middleware
- Edge Config (KV store)
Runtime Validation
- Zod (env vars + API payloads)
API
- Next.js serverless functions
- OpenAPI 3.1 generated from Zod (zod-to-openapi)
- Swagger UI at /api-docs
Freshness Orchestration (AFL)
- Client-side Media Randomization (CMR)
- Bot Parity Layer (BPL) with SSG/ISR
- Orchestration Layer of Rewrites (Edge middleware, client-side performance detector, SmartLink routing)
Cache & ISR Control
- Vercel on-demand revalidation API (invalidate + clear global cache + warm cache)
- Vercel Cron Job
UI & Motion
- MUI + Emotion
- Framer Motion
- Swiper
- yet-another-react-lightbox
Testing
- Vitest (unit/integration)
- Playwright (E2E)
- axe-core (accessibility)
CI/CD & Coverage
- GitHub Actions (test, deploy, API calls, cache warming, cron job priming)
- Codecov
This project showcases a unique, custom-built architecture to solve a common problem: keeping a static site feeling "fresh" without a CMS, database, or frequent updates.
- Pioneering "Always Fresh Lite" (AFL): A custom adaptive rendering system in Next.js.
- Solves a Core Problem: Delivers a dynamic, randomized user experience (CMR) on a static site...
- ...Without SEO Compromise: Uses a "Bot Parity Layer" (BPL) and "Orchestration Layer of Rewrites" (OLR) to serve fully rendered, bot-optimized content to crawlers and AI agents.
- End-to-End Ownership: Designed and built end-to-end by Rito, spanning Product, Brand, UX, and Full-Stack/DevOps engineering.
- Enterprise-Grade Testing: A comprehensive test suite with 90%+ coverage, including Vitest (unit/integration), Playwright (E2E), and axe-core (a11y).
The UI involves significant customization through scoped CSS modules, robust hooks that orchestrate content randomness both client-side and server-rendered, and stylish skeleton placeholders (via higher-order components) to mitigate experiences under weak network connectivity.
The best way to explain how Always Fresh Lite (AFL) works and why it exists is through the questions and constraints that gave rise to it. The following subsections lay out that line of reasoning as questions, constraints, and answers, explaining the strategy and implementation as well as why AFL may be preferable to other powerful alternatives.
The premise begins with this:
The site will NOT be actively updated with new content frequently.
Then the first question that follows is a UX question:
How do we get the user to feel like the site is actively updated even though it's really not?
or rephrased in the negative sense
"How do we get the user not to feel like the site is stale and outdated after the first visit?"
The most direct answer is: the user sees different content when they visit again.
The next and most important question is:
How?
And how to do it with these constraints:
- Keep resource expenditure and infrastructure overhead at a minimum.
- No frequent manual updating of content.
- Strong SEO support.
- Solid page performance without disruptive loading or layout shifts.
From here we get into more specific architectural questions to rule out certain approaches of how to achieve this:
- How do we do it without CMS infrastructure? Because we don't want the overhead or the cost.
- How do we do it without server-side rendering? Because we don't want the resource expenditures, operational complexity or latency.
- How do we do it so that it's "always fresh" and not just fresh every once in a while like Incremental Static Regeneration (ISR)?
- How do we do it without pre-building every possible permutation of a page with randomized component properties (as if A/B testing were treated as a build feature)? Because that's a very heavy build process.
- How do we do it to leverage the speed of CDN caching or comparable delivery time?
The answer to this "how" is two techniques used together: Client-Side Media Randomization (CMR) and the Bot Parity Layer (BPL).
Client-Side Media Randomization (CMR) — Pre-renders most of a page’s HTML, CSS, and JSON-LD via Static Site Generation (SSG), then uses browser JavaScript to select, fetch, and display randomized media (images, text variants) from an in-app media pool. CMR delivers different visual experiences on repeat visits with very low overhead and solid performance. The media pool is bundled inside the Next.js app (no external CMS or separate media bucket required), which simplifies deployment and keeps media rotation fast and local.
However, this technique by itself is incomplete due to a key limitation of CMR affecting SEO and AI agent retrieval. The initial HTML may not contain every media asset; some are selected and fetched only after JavaScript loads, which can create friction for crawlers or AI agents that only fetch initial HTML. While many search engines execute JS and index content added client-side, relying solely on client-side randomization risks missed indexing or retrieval in environments that do not evaluate post-load JS.
Note: The content cycled must remain categorically consistent with the JSON-LD schemas annotating the page to preserve semantic accuracy and avoid mismatches between visible media and structured data.
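A minimal sketch of CMR's client-side selection step, assuming a bundled media pool; the pool contents, types, and function names here are illustrative placeholders, not the repo's actual code:

```typescript
// Hypothetical sketch of the CMR selection step. In the real app this would
// run in a hook/effect after hydration; here it is a plain function.
type MediaItem = { src: string; alt: string };

const heroPool: MediaItem[] = [
  { src: "/media/hero-01.jpg", alt: "Editorial look 1" },
  { src: "/media/hero-02.jpg", alt: "Editorial look 2" },
  { src: "/media/hero-03.jpg", alt: "Editorial look 3" },
];

// Pick a random item, optionally excluding the one shown on the last visit
// (which a browser implementation might persist in localStorage).
export function pickRandomMedia(pool: MediaItem[], excludeSrc?: string): MediaItem {
  const candidates = pool.filter((m) => m.src !== excludeSrc);
  const source = candidates.length > 0 ? candidates : pool; // fall back if pool has one item
  return source[Math.floor(Math.random() * source.length)];
}
```

Excluding the previously shown item is one simple way to guarantee repeat visitors actually see something different, rather than relying on chance alone.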
The question then becomes:
How do we address this SEO/AI weakness of CMR without giving up the CMR benefit users experience?
The answer is:
Bot Parity Layer (BPL) — Provides parallel, bot-optimized copies of CMR pages where media are pre-rendered at build time (SSG), with optional build-time randomization and Incremental Static Regeneration (ISR) for periodic media rotation. BPL ensures search engines and AI crawlers receive fully pre-rendered, indexable HTML and media content without interfering with human users' fresh CMR experiences. BPL pages carry canonical links pointing to the CMR pages and a parallel internal linking system to ensure client-side navigation remains within the BPL.
The next question to follow from BPL:
How do we get search engine crawlers and AI agents and other retrieval tools to use or even know about these non-canonical bot pages?
The answer is in the next section: an Orchestration Layer of Rewrites (OLR).
For every canonical page URL such as /about or /gallery, there is a corresponding bot version (e.g., /bot/gallery) served under the same canonical URL (without the /bot prefix), along with a set of vectors that orchestrate how and when bot pages are delivered. There is no perfect way to know whether a visitor is a search engine crawler or an agentic tool, but based on known tendencies and some publicly available information, the Orchestration Layer of Rewrites (OLR) covers several vectors to manage the pages served to bots while minimizing interference with human user experiences on the canonical pages. Each method assumes bots navigate intentionally to a canonical indexable page such as /gallery.
1. Middleware — Addressing multiple vectors: (1) known User-Agents for bots and search engine crawlers such as "Googlebot"; (2) IP ranges sourced from Vercel Edge Config, hydrated from provider JSON feeds and static Claude ranges; (3) heuristic fallbacks for generic crawler keywords and scripted HTTP clients; and (4) manual overrides via query flag (?afl=bot|human). Whenever a request (e.g., /gallery) matches one of those vectors, the middleware issues a server-side rewrite to /bot/gallery while keeping the canonical URL visible. Direct visits to /bot/* receive a 301 back to canonical. Having duplicate variant pages specifically designed for search engine indexing is a valid and accepted practice, as long as it is close enough thematically to not be considered "cloaking".
2. PerformanceDetector — when a page is rendered in a browser, PerformanceDetector runs and checks the user's system specs for network type, CPU cores, and memory. If the device falls below a certain threshold, it performs a client-side rewrite (router.replace) to fetch the BPL version while keeping the canonical URL in history. Bots and agents may conserve resources by using low-spec tools for lightweight fetching tasks, so this becomes one vector to guide them to BPL pages. This mechanism also deliberately applies to human users with low-spec systems, giving them a more performant site experience with less client-side rendering, with the trade-off that they only see freshness upon ISR regenerations (if active).
3. SmartLink + History Bridge — Internal navigation uses a SmartLink wrapper that, when operating in bot mode, pushes the /bot/* route while presenting the canonical path to the browser. A custom _app hook rewrites history state on route changes so back/forward navigation continues to show canonical URLs even though the bot payload was fetched under the hood. This keeps navigation fluid, maintains canonical optics, and ensures bots (and low-spec users) that run JavaScript to navigate stay inside the BPL variant map once routed there. If they instead hard load each URL, sidestepping the Smart Link internal routing, the middleware will provide the rewrites again.
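In sketch form, the middleware's decision reduces to a classification step plus a rewrite target. This framework-free TypeScript sketch uses illustrative keyword lists and only covers the User-Agent and query-override vectors; the IP-range and heuristic vectors, and the repo's actual lists, are assumed to differ:

```typescript
// Simplified, framework-free sketch of the OLR classification the real Edge
// middleware performs before rewriting /gallery to /bot/gallery.
// The keyword list below is illustrative, not the repo's actual vector data.
const BOT_UA_KEYWORDS = [
  "googlebot", "bingbot", "gptbot", "claudebot",
  "crawler", "spider", "curl", "python-requests",
];

export function classifyVisitor(userAgent: string, aflOverride?: string): "bot" | "human" {
  // Manual override (?afl=bot|human) wins over every heuristic.
  if (aflOverride === "bot") return "bot";
  if (aflOverride === "human") return "human";
  const ua = userAgent.toLowerCase();
  return BOT_UA_KEYWORDS.some((kw) => ua.includes(kw)) ? "bot" : "human";
}

export function rewriteTarget(pathname: string, visitor: "bot" | "human"): string {
  // Bots get the parallel /bot/* copy; the canonical URL stays visible to them.
  return visitor === "bot" && !pathname.startsWith("/bot/") ? `/bot${pathname}` : pathname;
}
```

In the real middleware the return value would feed a server-side rewrite (keeping the canonical URL in the address bar) rather than a redirect, which is what preserves canonical optics.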
Quick Recap
So far we have established three layers for implementing AFL:
- CMR - user experience layer providing fresh content without operational overhead
- BPL - indexing / discoverability layer for bots (and humans with low-spec systems)
- OLR - rewrite-based routing intelligence to coordinate which visitors get CMR or BPL
Naturally, the remaining critical concerns cover the caching strategy, addressed in the next section.
How do we ensure a caching strategy that meets the following conditions?
- All pages are cached to be served as quickly as possible to improve SEO and user experience with page loading
- Canonical CMR pages are never served stale after a new build / deployment so the latest versions are always served from cache.
- Caching timelines are sensible.
- Freshness is assured on a global basis, not a regional one.
And as a bonus, since we have extra content from CMR, how can we also give bots fresh content occasionally to improve SEO without compromising BPL benefits? (Search engines like seeing fresh content, and the answer lies in using ISR for BPL pages)
We then add these conditions to the caching strategy:
- ISR pages are never served stale after revalidation / regeneration or upon initial build/deployment (same as SSG) so bots always get the latest version.
- ISR is not so frequent as to basically be SSR.
- The sitemap is updated at the same frequency as the ISR pages to reflect the content changes.
Vercel is the chosen hosting provider. Its Content Delivery Network (CDN) comes built in for Next.js apps deployed on it and supports ISR rebuilds via getStaticProps, so the conditions will be met based on this CDN's current rules of operation and the Pages Router's ISR behavior.
Firstly, for all pages, both CMR and BPL share a common limitation. Upon new site (re)deployments on Vercel, the prior page cache is invalidated globally; however, the cache for each page is only replenished after the next request, so the first request for each page is a MISS and can incur latency that is undesirable for SEO.
The question can be phrased as: How do we ensure caches are fresh globally after initial (re)deployments with Vercel's CDN?
Secondly, Next.js Pages Router ISR has a key limitation: upon invalidation of a page, the CDN serves stale content once before serving fresh content to the next hit in that region only. The CDN cache is refreshed on a regional basis, not a global one, so stale content may still be served in other regions. If a search engine receives stale content and indexes the same identical page as before, that defeats any benefit of using ISR.
The question can be phrased as: How do we trigger page regeneration for the BPL and refresh their global cache without warming every single region individually? (Since each region would need to be warmed separately or else a stale page would still be served to the next hit in that region)
A reliable solution to both issues starts with creating an API endpoint at /api/revalidate that uses Vercel's on-demand invalidation and is responsible for:
- invalidating the cache for specific pages or entire groups of pages globally, and
- warming pages (hitting each page to refresh the cache globally after it's been invalidated)
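The invalidate-then-warm pattern can be sketched as follows, with the revalidator and fetcher injected so the logic stays framework-free; all names and signatures here are illustrative, not the repo's actual API:

```typescript
// Hedged sketch of the invalidate-then-warm loop behind /api/revalidate.
// In production, `revalidate` would wrap Next.js' res.revalidate() and
// `fetchPage` would be a real HTTP GET that repopulates the CDN cache.
type Revalidator = (path: string) => Promise<void>;
type Fetcher = (url: string) => Promise<number>; // returns HTTP status code

export async function revalidateAndWarm(
  paths: string[],
  baseUrl: string,
  revalidate: Revalidator,
  fetchPage: Fetcher
): Promise<{ warmed: string[]; failed: string[] }> {
  const warmed: string[] = [];
  const failed: string[] = [];
  for (const path of paths) {
    await revalidate(path); // purge the cached copy
    const status = await fetchPage(`${baseUrl}${path}`); // hit the page to re-populate the cache
    (status === 200 ? warmed : failed).push(path);
  }
  return { warmed, failed };
}
```

Reporting warmed vs. failed paths lets the CI workflow (or cron handler) surface pages that did not come back healthy after a deploy.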
To address the caching issue for initial deployments, the GitHub Actions CI/CD workflow deploy-vercel.yml (used for deploying the site to Vercel) hits a consolidated /api/cron endpoint once the deployment succeeds. That endpoint immediately clears the global CDN cache, re-generates the relevant ISR targets, and warms every page so the next visitor in any region receives a fresh response.
To address the second issue around ISR and caching, a dedicated maintenance flow keeps canonical and bot pages refreshed on an ongoing cadence without requiring per-region warm-ups:
- Edge Config powered bot detection — middleware and downstream logic read AI provider CIDRs from Vercel Edge Config; the payload merges static Claude ranges, deduped JSON feeds, and emits warnings when providers ship malformed data. When Edge Config is unavailable, cached documents or static ranges keep routing reliable. RFC-5737 test ranges are used automatically in test environments so detection logic can be exercised safely.
- /api/ip — on-demand endpoint (and cron task dependency) that fetches the latest AI provider IP lists, merges them with static ranges, normalizes/dedupes the CIDRs, and writes the resulting document to Edge Config (or simply returns it when commit=false).
- /api/revalidate — encapsulates targeted revalidation plus optional warm logic, used both directly and via cron.
- /api/cron — orchestrates the two tasks above so a single call refreshes Edge Config IP data and performs ISR regeneration plus warming.
- vercel.json registers a daily Vercel Cron (0 0 * * *); the handler consults an Edge Config-backed maintenance schedule (default interval: six days, override via CRON_MAINTENANCE_INTERVAL_DAYS) before deciding whether work is due or should be skipped.
- Successful runs batch-upsert the refreshed IP document and maintenance schedule in a single Edge Config PATCH, and respect dryRun (no writes) or force (run immediately) controls exposed by the API.
- The deployment workflow also calls /api/cron post-release so the fresh build is served globally right away.
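The normalize/dedupe step that /api/ip performs before writing to Edge Config might look roughly like this; the static range and the cheap shape check are illustrative placeholders, not the repo's real lists or validation:

```typescript
// Hypothetical sketch of the CIDR merge/dedupe step feeding Edge Config.
// STATIC_RANGES stands in for the hard-coded provider ranges; the real
// payload and validation logic in the repo are assumed to be more thorough.
const STATIC_RANGES = ["160.79.104.0/23"]; // e.g., a published Anthropic range

export function mergeCidrs(feeds: string[][]): string[] {
  const merged = new Set<string>(STATIC_RANGES);
  for (const feed of feeds) {
    for (const cidr of feed) {
      const trimmed = cidr.trim();
      // Cheap IPv4-CIDR shape check; malformed entries are skipped
      // (and, in the real flow, would emit a warning).
      if (/^\d{1,3}(\.\d{1,3}){3}\/\d{1,2}$/.test(trimmed)) merged.add(trimmed);
    }
  }
  return [...merged].sort(); // deterministic order for stable Edge Config diffs
}
```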
With an external source controlling the timing and execution of ISR, the next question is: How do we synchronize the sitemap to show time updates alongside the ISR updates?
The sitemap behaves differently from normal pages and has its own rules:
- It is an XML file, not HTML.
- Latency of a sitemap is NOT a significant SEO factor, unlike how it is for latency of HTML pages.
- Building the sitemap is extremely lightweight given the relatively small number of pages and assets.
Goal: keep the sitemap’s <lastmod> aligned with ISR updates triggered by the Vercel Cron cadence so bots see timely change signals.
The Approach:
- Server-side rendered (SSR), no CDN cache — Serve the sitemap via an SSR/serverless route without CDN caching. This guarantees the file can reflect the latest known "mod time" immediately, and since the build process and latency are negligible, caching it on the CDN would add the burden of managing invalidation and warming workflows (as with the other pages) without upside.
- Build timestamp source of truth — During each deploy, the prebuild script generate-build-time.ts writes a BUILD_TIMESTAMP to the auto-generated lib/generated/buildTime.ts file. The sitemap route reads this value to populate <lastmod>.
- Revalidation cadence config — SITEMAP_REVALIDATE_SECONDS (env var) mirrors the cron cadence used by the Vercel maintenance loop for ISR priming (default ~6 days). The sitemap handler compares Date.now() to BUILD_TIMESTAMP and bumps the reported date once per elapsed interval.
- Cron alignment — When /api/cron runs (either immediately after deploy or on the scheduled six-day cadence), ISR pages are revalidated and warmed globally. Because the sitemap reads BUILD_TIMESTAMP from the latest deploy (or its bump logic), its <lastmod> stays in sync with these refresh cycles. Any slight skew between the deploy timestamp and the cron trigger is negligible for SEO; bots still encounter consistent freshness signals across both the sitemap and ISR'd pages.
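The interval-bump logic from the cadence-config step can be sketched as a pure function; the names mirror the env vars above, but the repo's actual handler code is assumed to differ:

```typescript
// Sketch of the <lastmod> bump: starting from the deploy's BUILD_TIMESTAMP,
// advance the reported date by one full interval for each elapsed window of
// SITEMAP_REVALIDATE_SECONDS, so the sitemap tracks the ISR cadence.
export function computeLastMod(
  buildTimestampMs: number,
  nowMs: number,
  revalidateSeconds: number
): string {
  const intervalMs = revalidateSeconds * 1000;
  const elapsed = Math.max(0, nowMs - buildTimestampMs);
  const bumps = Math.floor(elapsed / intervalMs); // completed intervals since deploy
  return new Date(buildTimestampMs + bumps * intervalMs).toISOString();
}
```

Because the bump is derived arithmetically rather than stored, the SSR sitemap route needs no database or cache: every request computes the same `<lastmod>` until the next interval boundary.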
This repo prefers pnpm as a package manager, but any other should work.
Clone the repo and install dependencies:

```bash
git clone https://github.com/ritovision/vreeland-fansite.git
cd vreeland-fansite
pnpm install
```

Run the development server; it's open on all interfaces so your mobile device can access it if it's on the same network:

```bash
pnpm dev
# Open http://localhost:3000 in your browser to preview
# Or use your device's IP address on your mobile
```

Build for production:

```bash
pnpm build
pnpm start
```

The repo uses .env.local (template in .env.example). Core values:
- CRON_SECRET – shared auth token for /api/cron, /api/ip, /api/revalidate, and post-deploy hooks. Use the same value locally and in Vercel.
- EDGE_CONFIG_ID – the Edge Config identifier from the Vercel dashboard.
- VERCEL_ACCESS_TOKEN – a Vercel Personal Access Token with the Manage Edge Config scope (used for write access via REST).
- VERCEL_TEAM_ID (optional) – only needed when the Edge Config lives under a team account.
Edge Config setup workflow:
- Create the Edge Config in Vercel (vercel edge-config create or via the UI) and copy the resulting ID.
- Generate a Personal Access Token (Account Settings → Tokens) with the "Manage Edge Config" scope.
- Add EDGE_CONFIG_ID, VERCEL_ACCESS_TOKEN, and CRON_SECRET to .env.local for local work and to the Vercel project settings (Production + Preview). Include VERCEL_TEAM_ID if applicable.
- Seed the document locally by running pnpm edge-config:sync. This merges the hard-coded ranges with provider JSON feeds and writes the normalized payload to Edge Config. The scheduled cron job and /api/ip endpoint reuse the same logic in production.
The repo has a comprehensive testing suite of Vitest unit and integration tests with a minimum of 90% coverage across statements, branches, functions, and lines. End-to-end testing is covered extensively through Playwright specs proving robustness across desktop and mobile Chrome and Firefox worker flows, along with axe-core for ensuring pages maximize accessibility.
For running Vitest (automatic coverage):

```bash
pnpm vitest:run
```

- High-fidelity API tests: Integrated next-test-api-route-handler with a lightweight request runner that exercises real Next.js semantics (method/headers/query/body, x-forwarded-* origin, res.revalidate). Surfaced edge cases (malformed JSON, multi-value query params, method/auth guards) before release.
- Contract coverage: Added contract tests (Zod + generated OpenAPI) to validate request/response shapes and error enums in CI. Combined with the runner to verify both spec compliance and runtime behavior.
End‑to‑end (E2E) tests run with Playwright and cover all major UX primitives of the site: navigation, accessibility, hero/media behavior, galleries, footers/headers, mobile menu, fun facts, press pagination/filters, and video carousels. The suite is modular (component/feature‑centric) rather than page‑level “kitchen sink” flows because the site is largely stateless and components are loosely coupled. This architecture makes broad regressions unlikely; individual specs target each feature in isolation, while global specs (navigation & a11y) ensure baseline quality across pages. Environment variables are configurable via .env with a template provided in .env.example.
Scripts
```bash
# default CI/local run (2 workers)
pnpm e2e

# stability-first run (single worker)
pnpm e2e:stable

# core smoke: site navigation
pnpm e2e:nav

# core smoke: accessibility (axe-core)
pnpm e2e:a11y

# strict mode (treat console errors/warnings as failures via env toggles)
pnpm e2e:strict

# open the HTML report generated by the last run
pnpm e2e:report
```
Testing Browsers (Projects | 4 configs)
Configured in playwright.config.ts and controlled by env toggles:
- chrome-desktop — Desktop Chrome @ 1280×800
- chrome-mobile — Pixel 7 profile
- firefox-desktop — Desktop Firefox @ 1280×800
- firefox-mobile — Firefox @ 412×915 viewport (mobile layout)
All browsers are enabled by default, but they can be selectively disabled via .env configurations.
To disable one set of browser types, set a value below to false.
```
ENABLE_CHROME=false
ENABLE_FIREFOX=true
```
To disable specific browsers, set a value below to false.
```
ENABLE_CHROME_DESKTOP=false
ENABLE_CHROME_MOBILE=true
ENABLE_FIREFOX_DESKTOP=true
ENABLE_FIREFOX_MOBILE=true
```
Strict Mode for Warnings and Errors
By default, tests do not fail on console.error or console.warn to reduce flakiness from benign issues; only uncaught page errors (pageerror) are fatal. The strict-mode toggles are opt-in; when enabled, they promote console.error and/or console.warn to failures. Enforcement is implemented in helpers/console.ts and applied automatically via helpers/nav.ts.
```
strict_mode_errors=true
strict_mode_warning=true
```
Reliability & artifacts
Playwright is configured for practical observability:
- Retries: retries: 1 — flaky failures get one automatic retry.
- Trace: trace: on-first-retry — a full trace is captured on the retry attempt.
- Screenshots: screenshot: only-on-failure globally. Many specs also attach step screenshots intentionally (e.g., random image comparisons, carousel states) via helpers/report.ts.
- Randomization proof — image randomization tests attach a before/after pair and compare currentSrc values so changes are demonstrated, not guessed.
- Video: video: retain-on-failure globally; some specs (e.g., video.spec.ts) opt in to always record during the test to capture nuanced UI timing.
- Repro clarity — carousels and swipe/scroll interactions are easier to debug with video + trace when timing differs across browsers.
Helpers & flows (test architecture)
Helpers (playwright/helpers/*) are low‑level building blocks (e.g., images.ts for reliably detecting image loads and src changes, nav.ts for robust navigation + console capture + page screenshots, videoSwiper.ts for carousel interactions, accordion.ts, tabs.ts, etc.).
Flows (playwright/flows/*) are higher‑level use‑case macros that stitch helpers into common scenarios (e.g., mobile-menu.ts, gallery.ts, picnic-canvas.ts). When a component is fully self‑contained, a spec may inline the flow for clarity.
Playwright validates the revalidation endpoint (auth, targets, optional warming).
Env
- CRON_SECRET – required; sent as X-Auth.
- BASE_URL – target host (default http://localhost:3000).
- RUN_REVALIDATE=1 – opt-in gate to run this spec.
- REVALIDATE_PROJECT – optional: limit to one browser project.
Run
```bash
RUN_REVALIDATE=1 pnpm playwright test playwright/e2e/revalidate.worker.spec.ts
```
Dry-run (server-controlled)
- Flag: SERVER_ENV.API_DRY_RUN (on the API deployment).
- Behavior: skips res.revalidate() (no ISR invalidation) but still does warming GETs against the same host you hit.
- Use: only for safe preview/prod smoke checks. Keep off in real production.
Spec/fixtures
- playwright/e2e/revalidate.worker.spec.ts
- playwright/fixtures/revalidate.ts (helpers + auth header)
Quick curl
```bash
curl -X POST "$BASE_URL/api/revalidate?warm=true" \
  -H "X-Auth: $CRON_SECRET" -H "Content-Type: application/json" \
  -d '{"target":"canonical"}'
```
This app uses a three-tier error handling strategy with react-error-boundary to contain failures and keep the UI usable. A small factory/wrapper (withBoundary(Fallback)) produces standardized components: FeatureBoundary (component/section), PageBoundary (route), and RootBoundary (app), each sharing consistent logging and optional resetKeys.
Each tier renders a tailored fallback:
- Feature fallback offers Retry to re-render a broken section without disrupting the page.
- Page fallback offers Try Again and Go Home to preserve navigation.
- Root fallback shows a full-page overlay with a Reload action.
In development, fallbacks include the error message and stack; in production they switch to friendly copy. A custom Next.js _error.tsx replaces the default 404/500 to keep error surfaces visually consistent with the rest of the site.
A dedicated, guarded test page at /test-errors (available in dev or when NEXT_PUBLIC_ENABLE_TEST_PAGE=true) can trigger ?error=root|page|component to exercise each tier, and includes an async error demo to illustrate what boundaries cannot catch (errors thrown outside React’s render cycle). The page auto-redirects to / when disabled to avoid exposure in live production.
Result: component-level failures are isolated (the rest of the page continues working), route-level failures remain navigable, and catastrophic errors degrade to a controlled, branded UI.
Making the site easy to consume for a variety of audiences is a priority, so best practices such as ARIA support are given first-class consideration, and axe-core testing ensures the grounds are well covered. Thanks to MUI, Radix, and yet-another-react-lightbox for providing built-in ARIA support for some of the components used.
This site uses JSON-LD to improve search visibility, SERP features, and machine-readable signals for search engines and other consumers (including AI). Two helpers in lib/jsonld/ coordinate how JSON-LD is collected and injected:
- getJsonLd.ts — a build-time/server helper that scans the data directory for .json files, parses them, and returns an array of JSON-LD objects. It looks for site-wide objects under data/global/ and page-specific objects under data/<page>/. Invalid or empty files are logged and skipped, so keep each file as a single valid JSON object (not an array) and use clear filenames like 01-organization.json or 01-article.json for ordering and clarity.
- loadJsonLdScripts.tsx — a lightweight React helper that takes an array of JSON-LD objects and renders them as <script type="application/ld+json"> elements using dangerouslySetInnerHTML. Use this in your page or layout component to inject structured data into the rendered HTML where appropriate.
How to use:
- Place site-wide objects in data/global/*.json (e.g., Organization, WebSite, Logo).
- Place page-specific objects in data/<page>/*.json (e.g., Article, Person, ImageObject).
- Call getJsonLd('<page>') from your page-level code (e.g., getStaticProps or a server component) to collect the array of objects.
- Pass that array to loadJsonLdScripts(...) in the rendered output so each object is emitted as a separate <script type="application/ld+json"> tag.
Where to check & validate:
- Inspect the running page (run pnpm dev or view the production URL) and look for <script type="application/ld+json"> tags in the page source or the Elements inspector.
- Validate the page's structured data using the Schema.org Validator at https://validator.schema.org (paste the page URL or the extracted JSON-LD snippet). This is the recommended quick check for correctness and conformance.
Rito credits his Semantic Web initiative Web3LD for providing support in setting up the JSON-LD infrastructure on this site.
This site is not run for profit. It does not collect or sell user data, does not sell ads or merchandise, and does not engage in other forms of revenue generation. It is run for infotainment purposes, cataloging the career of a public figure, Caroline Vreeland.
This repo's source code is open source under the Apache 2.0 license; however, branding, trademarks, and content (including videos, images, and audio) are NOT included in that license.
To see the full case study and breakdown of this project including architecture, features, and design decisions, visit: ritovision.com/projects/fansite