Support for the web-vitals framework #161
Replies: 12 comments 7 replies
-
Given Google's May 2020 announcement that "Core Web Vitals" will be part of calculating search engine result rank "sometime in 2021", it would be great if Plausible could help website operators with more info: https://webmasters.googleblog.com/2020/05/evaluating-page-experience.html (It's not 100% clear to me that any of these new "page experience" ranking factors are measurable without the long-term data Google may be collecting from users.)
-
Another link that may be helpful: https://nextjs.org/blog/next-9-4#integrated-web-vitals-reporting
You could use Next.js to set up a demo quickly and generate vitals to send to Plausible. It would be interesting to see:
-
Got three more votes for web vitals tracking over the last 24 hours or so. Google has announced that web vitals will become part of their ranking algorithm from May this year.
-
I'd love some context on what exactly the implementation of this should look like. Should it work the way Google Analytics does, where the dashboard can show the data over time, received over an API, but the product doesn't gather it itself? Should we provide a script that gathers the data in the browser and automatically sends the results? Or should Plausible run a cron job to calculate it for every site that has it enabled? I can see significant drawbacks with every option: with the API approach we don't support collection at all, most of the vitals libraries only work in Chromium browsers, and a cron job would be very heavy and strain the infrastructure.
-
Google's announcement in November 2020 that web vitals will be a new ranking factor in May 2021: https://developers.google.com/search/blog/2020/11/timing-for-page-experience
My understanding is that two of the elements, Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS), can be lab-measured, while the third, First Input Delay (FID), relies on data collected from real users. However, Total Blocking Time (TBT) is seen as a lab approximation of FID, and that is how the WebPageTest folks have implemented Core Web Vitals measurements: https://webpagetest.org (more at https://www.webpagetest.org/forums/showthread.php?tid=16122)
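On the TBT-as-lab-proxy point, here is a rough sketch of how a lab tool can approximate TBT with the browser's Long Tasks API. The 50ms threshold and the FCP lower bound follow the published TBT definition, but real TBT is measured between FCP and Time to Interactive, so treat this as a simplification:

```js
// Approximate Total Blocking Time: sum the portion of each long task
// (>50ms) that exceeds 50ms, counting tasks after First Contentful Paint.
// Note: `buffered: true` for 'longtask' entries is not honored everywhere,
// so register this observer as early in the page as possible.
let tbt = 0;
const fcpEntry = performance.getEntriesByName('first-contentful-paint')[0];
const fcp = fcpEntry ? fcpEntry.startTime : 0;

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.startTime >= fcp) {
      tbt += Math.max(0, entry.duration - 50);
    }
  }
}).observe({ type: 'longtask', buffered: true });
```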
-
The web-vitals library is a "tiny" 1 KB by Google's standards, but it would more than double our script size if included, so that's out of the question as a default and at best could be an optional feature only. This is a Google initiative and Core Web Vitals is now a report in Google's Search Console. We already have a Search Console integration and many of our users have it enabled, so I wonder if there's a way to pull the Web Vitals data from there at the same time as the search keywords? We could then present the key data in an interesting new report as part of our dashboard. It would be cool to focus this new report more on speed and page weight in general rather than on web vitals alone: data transfer/CO2 emissions and whether the server is powered by renewable energy or not.
-
I haven't looked at it super deeply yet. The big argument seems to be whether to use lab data or field data.

It would be fairly natural to just measure web vitals on each tracking request and send the results back to Plausible. It would fit neatly into the current product. However, like @metmarkosaric says, it would double our script size and ironically add load time to the very page that's trying to reduce it. To add to that, my understanding is that Chrome is already capturing field data for web vitals, and it can be accessed publicly.

This leads me to an idea I've had: there could be a lot of value in crawling our customers' sites periodically. We could capture a bunch of data while crawling: web vitals, page weight, broken links, accessibility issues, incorrect HTML syntax, on-page SEO reports, etc. Basically what in programming is known as static analysis. There are lots of tools for these things, but few that you can set up to monitor your site over time. This could even be a standalone product from analytics. These static metrics don't need to be measured a million times a day if the page gets a million hits; it's enough to crawl once per day and save the results in our database. The only concern with that is the load @Vigasaurus brought up.

I think it could be an issue if it's not well designed. Our cloud instance has over 10k sites; if each site has 10 pages on average, we'll be crawling 100,000 pages per day. Not crazy, but not trivial either, and I think there's a ton of prior art in making crawling efficient. If it ends up being too heavy, we can make it an optional (paid) extra in our cloud version. When you're self-hosting, you're likely using Plausible on a handful of sites anyway, so crawling wouldn't be prohibitively expensive.
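On the "accessed publicly" point: the field data Chrome collects is exposed through the Chrome UX Report (CrUX) API. A hedged sketch of what pulling it could look like, where `CRUX_API_KEY` and the origin are placeholders:

```js
// Query the public CrUX API for field Core Web Vitals. p75 is the value
// Google uses to judge whether a page passes a vital.
const response = await fetch(
  `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      origin: 'https://example.com',
      metrics: [
        'largest_contentful_paint',
        'first_input_delay',
        'cumulative_layout_shift',
      ],
    }),
  }
);
const { record } = await response.json();
console.log(record.metrics.largest_contentful_paint.percentiles.p75);
```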
-
I have been reading about and experimenting with web vitals a lot lately, and I would offer a different approach. Since lab results (for us at least) are not very stable and differ from actual data from clients, we only use them very sparingly. I think the easiest way to get started would be to add a numerical metric event, similar to the goal conversion event. That way, every developer can track their own metrics and decide whether they want to use the web-vitals framework or lab data.
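A minimal sketch of what that could look like on the developer's side, assuming the web-vitals library (v1 API) and Plausible's existing custom event function; the 'Web Vitals' event name and props shape are illustrative:

```js
import { getCLS, getFID, getLCP } from 'web-vitals';

function sendToPlausible(metric) {
  window.plausible('Web Vitals', {
    props: {
      metric: metric.name,
      // CLS is a unitless decimal, so scale it to survive rounding.
      value: Math.round(metric.name === 'CLS' ? metric.value * 1000 : metric.value),
    },
  });
}

getCLS(sendToPlausible);
getFID(sendToPlausible);
getLCP(sendToPlausible);
```

The missing piece would be Plausible treating the value prop as a number it can aggregate and graph, rather than as an opaque string.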
-
A competing tool, Panelbear, offers performance monitoring which splits metrics into Frontend, Backend, and Network categories. From what I can tell, it leverages the Navigation Timing API (the newer Level 2 of which is still a working draft). In addition to measuring Core Web Vitals, I think it'd be helpful to have visibility into the Backend and Network metrics as well.
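A rough sketch of such a split using Navigation Timing Level 2; the bucket boundaries here are my assumption, not necessarily how Panelbear defines them:

```js
// Run after the window load event so loadEventEnd is populated.
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  const network = nav.connectEnd - nav.startTime;       // DNS + TCP + TLS
  const backend = nav.responseStart - nav.requestStart; // server time to first byte
  const frontend = nav.loadEventEnd - nav.responseEnd;  // parse, render, load
  console.log({ network, backend, frontend });
}
```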
-
Also got interested in this. I use Next.js, and it has built-in support for measuring performance [1]; each metric arrives in a callback as a small data object (see the sketch below). Could this be sent to Plausible using custom props in Plausible custom goals [2]?
[1] https://nextjs.org/docs/advanced-features/measuring-performance
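A sketch of wiring that up, assuming Next.js's documented reportWebVitals hook and a Plausible script with custom events enabled; the 'Web Vitals' goal name and props shape are illustrative:

```js
// pages/_app.js
export function reportWebVitals({ id, name, value }) {
  if (typeof window !== 'undefined' && window.plausible) {
    window.plausible('Web Vitals', {
      props: {
        metric: name,
        value: Math.round(name === 'CLS' ? value * 1000 : value),
        id,
      },
    });
  }
}
```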
-
Being such an important factor for user experience and SEO, I think the lack of support for web vitals might be stopping people from moving from Google Analytics to Plausible, and it would be great if it could be added. I also use Next.js.
-
I don't believe Search Console data imports or page speed tools would add any real value in the context of web vitals. For navel-gazing they're fine, but when you get a warning that your LCP slipped above 2.5s, what's needed is real field data that can be filtered. An average of the last month of data in aggregate (Search Console) is not remotely actionable. Neither is a tool like PageSpeed Insights' instant benchmark that shows an LCP of 0.5s while your visitors in Brazil or on Chromebooks are actually at 4s. Ideally, Plausible would have an event endpoint that could accept and graph data from the optional, 1 KB web-vitals library. Then we'd apply filters just like in Plausible's dashboard to instantly find the source of our problems: is it tablets, mobile, a specific country, an OS, some combination thereof? This is Google's version that allows segmenting from Analytics data: https://web-vitals-report.web.app/
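For concreteness, a sketch of what posting a vital to such an endpoint could look like, modeled on Plausible's documented Events API; accepting a numeric value prop and graphing it would be the new capability:

```js
await fetch('https://plausible.io/api/event', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    domain: 'example.com',
    name: 'LCP',
    url: window.location.href,
    // Device, country, and OS are derived server-side from the request,
    // which is what would make the dashboard filters work for vitals too.
    props: { value: 2780 },
  }),
});
```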
-
"Not sure if it’s something that fits in with Plausible but it would be a nice addition to have a Web Vitals card in the dashboard that shows you stats (maybe over time?) about your web-vitality stats.
Link to framework: https://github.com/GoogleChrome/web-vitals/"
(imported from the old roadmap)