I’m evaluating xk6-browser to collect web performance metrics such as FCP and page load time, and to check the application for performance regressions as new code is released. Here’s the script I’m using:
import launcher from "k6/x/browser";

function login(page) {
  page.goto('https://app/login');

  // Log in to the application
  page.$('input[name="username"]').type('sashika.wijesinghe');
  page.$('input[name="password"]').type('welcome');
  page.$('a[id="loginSubmit"]').click();

  // Wait for the next page to load
  page.waitForLoadState('networkidle');
  page.waitForNavigation();

  // Wait for a selector on the very first page after login to make sure it loaded
  page.waitForSelector('div[class="newsfeed-item-container"]');
}

export default function () {
  const browser = launcher.launch('chromium', { headless: false });
  const context = browser.newContext();
  const page = context.newPage();

  // Authenticate first
  login(page);

  // This is the actual page I want to get metrics from
  page.goto('https://app/marketplace');
  page.waitForNavigation();

  // Wait until the required page has loaded
  page.waitForSelector('ul[data-testid="carousel-container"]');

  page.close();
  browser.close();
}
My questions are:
(1) When I run the code I get some metrics, but I’m not sure how those metrics are collected?
(2) Do they only include the web performance metrics for the page I actually want to verify (/marketplace), or are they an average of the metrics for all the pages I go through during the process (like /login, /mainpage, /marketplace)?
Metrics are emitted by xk6-browser, and collected and processed by any k6 output you have enabled, so they work the same way as with plain k6. Let me know if you have a specific question about this.
The end-of-test summary shows an aggregated view of all metric data. But you can see individual metric samples by enabling a k6 output.
For example, if you enable the JSON output with xk6-browser run --out json=result.json script.js, you’ll see all raw metric data in the created result.json file. There you can see DOM-related metrics for any loaded pages, and HTTP-related metrics for any URLs loaded by the page, such as static assets. There will be no averages here, just raw metric samples for each URL.
So to answer your question, you’ll see metric data separately for all pages loaded in your script (/login, /marketplace, etc.).
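For illustration, each line in result.json is a standalone JSON object. A browser metric sample looks roughly like the line below; the timestamp, value, and exact tag set are made up for illustration and will vary with your xk6-browser version:

{"type":"Point","metric":"browser_dom_content_loaded","data":{"time":"2022-03-01T10:00:00Z","value":210.5,"tags":{"url":"https://app/marketplace"}}}

You can then filter the samples for a single page with a tool like jq, for example:

jq 'select(.type == "Point" and .metric == "browser_dom_content_loaded" and .data.tags.url == "https://app/marketplace")' result.json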
Thank you for your answer. I got the JSON output and analyzed how the summary value is calculated. I noticed from the log that there are some records without a proper URL.
In my scenario, there were 3 records with the URLs I mentioned, and some other records whose URL is empty, as above. So the summary calculation is made considering all the instances (3 URL instances + 4 empty instances, divided by 7). So the summary doesn’t actually reflect what I’m looking for.
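A jq filter like the following can isolate those empty-URL samples, assuming the result.json format shown above (the tag layout may differ in other versions):

jq 'select(.type == "Point" and (.data.tags.url // "") == "")' result.json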
Is it possible to set thresholds for the results in result.json (Thresholds)?
For example: I want to check that the browser_dom_content_loaded for 'https://xxx/catalog' is above some given value.
I noticed from the log that there are some records without a proper URL
That’s certainly unusual and shouldn’t happen. I was able to reproduce this with a more complex script we have, so I created issue #381 to track this. Feel free to subscribe to it for updates.
In any case, the metric values and the summary calculation should be correct. This seems to be a bug in resolving the right frame URL, so correlating samples to pages will be difficult, but the aggregated metric values themselves should be fine.
If you can share a runnable script that reproduces the issue, we can take a closer look.
And like Tom says, thresholds should work in the same way as they do in k6.
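As a minimal sketch, a threshold on a single page’s metric could look like the following. This assumes the samples carry the url tag shown in the JSON output; the metric and tag names may differ between xk6-browser versions, and the limit value is made up:

export const options = {
  thresholds: {
    // Fail the test if the 95th percentile DOMContentLoaded time for the
    // catalog page exceeds 1000 ms (the limit here is just an example).
    'browser_dom_content_loaded{url:https://xxx/catalog}': ['p(95)<1000'],
  },
};

This uses the standard k6 sub-metric syntax, metric{tag:value}, to scope the threshold to samples tagged with that URL rather than to the aggregated metric across all pages.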