How do I generate a summary report for each scenario at once with the k6 tool?

I was able to run performance tests successfully with k6. However, I've been trying to generate a separate summary report for each of my 4 scenarios in a single run, without success. My current workaround is to keep a single scenario (commenting out or removing the others), run the test, and generate the summary report, then swap in the next scenario and repeat.
Is there any approach I could follow to generate 4 summary reports, one per scenario, from a single run? When I run all scenarios together, I get a single summary report without the numbers split per scenario.

Unfortunately this is not easily possible right now.
One creative solution to avoid manually commenting out and rerunning the script is to use an environment variable to conditionally enable certain scenarios. Take a look at this example on the forum.
The summary report is just the result of some handy calculations based on the test's metrics. If you don't mind doing those calculations yourself, all metrics carry a default "scenario" tag, so you can filter the metrics per scenario in whatever output system or processing tool you wish to use. For example, you could do the calculations with jq if you export the results to JSON, or in a Grafana dashboard using InfluxQL, etc.
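As an illustration of that post-processing route, here is a rough Node.js sketch that computes a per-scenario average of `http_req_duration` from k6's JSON output. The exact line shape shown in the comment is an assumption to verify against your k6 version's `--out json` output:

```javascript
// k6's `--out json=results.json` writes one JSON object per line.
// Metric samples look roughly like:
//   {"type":"Point","metric":"http_req_duration","data":{"value":123.4,"tags":{"scenario":"browse"}}}
function averagePerScenario(ndjson, metricName) {
  const sums = {}; // scenario -> { total, count }
  for (const line of ndjson.split('\n')) {
    if (!line.trim()) continue;
    const entry = JSON.parse(line);
    if (entry.type !== 'Point' || entry.metric !== metricName) continue;
    const scenario = (entry.data.tags && entry.data.tags.scenario) || 'default';
    const s = (sums[scenario] = sums[scenario] || { total: 0, count: 0 });
    s.total += entry.data.value;
    s.count += 1;
  }
  const averages = {};
  for (const [scenario, { total, count }] of Object.entries(sums)) {
    averages[scenario] = total / count;
  }
  return averages;
}

// Usage sketch (file name is hypothetical):
// const fs = require('fs');
// console.log(averagePerScenario(fs.readFileSync('results.json', 'utf8'), 'http_req_duration'));
```

The same grouping could of course be done in jq or InfluxQL; the point is only that the "scenario" tag is already on every sample.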
You also might be interested in recent changes to the summary report (tentatively landing in the upcoming v0.30.0), which will make generating the report much more flexible. Separating it per scenario isn't currently planned, but feel free to propose the feature in a GitHub issue and we can discuss it there (disclaimer: I'm one of the maintainers).
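For reference, the environment-variable trick mentioned above can be sketched roughly like this. The `SCENARIO` variable name and the scenario definitions are placeholders; adapt them to your script:

```javascript
// Plain selection logic; in a k6 script you would feed the result into
// `export const options = { scenarios: ... }` using __ENV.SCENARIO.
const allScenarios = {
  browse:   { executor: 'constant-vus', vus: 10, duration: '1m' },
  checkout: { executor: 'constant-vus', vus: 5,  duration: '1m' },
};

function selectScenarios(scenarios, name) {
  // No name given: run everything, as before.
  if (!name) return scenarios;
  if (!scenarios[name]) throw new Error(`unknown scenario: ${name}`);
  return { [name]: scenarios[name] };
}

// Invocation sketch:
//   k6 run -e SCENARIO=checkout script.js
// with, inside script.js:
//   export const options = { scenarios: selectScenarios(allScenarios, __ENV.SCENARIO) };
```

You still run k6 once per scenario, but without editing the script between runs, and each run produces its own summary.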

What problem is React Suspense trying to solve?

I've seen some examples on reactjs.org, but I want to know what problem they are trying to solve and/or what the magic behind it is, and how I can use it in real projects aside from what's already in the React docs.
There are two use cases for React Suspense that I know of (and I'm pretty sure more will be discovered). Please note that in the answer below I use "Suspense" as a pragmatic umbrella term; in reality there are more pieces involved, such as lazy, react-cache, etc.
#1 Make it easier to obtain lower Time To Interactive
A lower Time To Interactive, a.k.a. the TTI metric, is a way to measure how fast your website feels to a user. If you inspect your network resources in your browser's dev tools, you will see that a very significant amount of time is spent waiting for the JavaScript file to download. Even if it is minified and compressed, it might not be optimal.
For example, if your website at some point requires a data visualisation library (say Highcharts), and it is not the first thing your user would see, you don't need to ship that visualization component in the first JavaScript file. That saves a lot of size in your initial bundle and improves your TTI metric.
This is done by the combined magic of webpack dynamic imports, React.lazy and React Suspense (that is what the docs point to).
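To demystify the magic a little, here is a toy sketch of the mechanism (not React's actual implementation; all names here are illustrative): the lazy component throws a promise while its module is loading, and a Suspense-like boundary catches it, shows the fallback, and retries after the promise settles.

```javascript
// Minimal sketch of the "throw a promise" protocol behind React.lazy/Suspense.
function lazy(loader) {
  let status = 'pending';
  let result;
  const promise = loader().then(
    (component) => { status = 'resolved'; result = component; },
    (error) => { status = 'rejected'; result = error; }
  );
  return function LazyComponent(props) {
    if (status === 'pending') throw promise;   // a Suspense boundary catches this
    if (status === 'rejected') throw result;   // an error boundary catches this
    return result(props);                      // loaded: render normally
  };
}

// A toy "Suspense boundary": on a thrown promise, show the fallback
// now and render again once the promise settles.
async function renderWithSuspense(component, props, fallback) {
  try {
    return component(props);
  } catch (thrown) {
    if (typeof thrown.then !== 'function') throw thrown; // a real error
    console.log(fallback); // e.g. "Loading..."
    await thrown;
    return component(props);
  }
}
```

In real React the boundary is the `<Suspense fallback={...}>` component and the retry is a re-render, but the control flow is the same idea.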
#2 Handle the usual cases around data fetching
This, I think, is still a work in progress, but it is something I remember the React team working on. If your component needs to get its data from a server (an API call), you will run into some common concerns and try to handle them in some capacity:
Show loading indicator if request is taking long
What if your request errors out (Error boundaries do that for you now)
What if you would like to cache your costly network requests
These are common concerns and that is where suspense will help out.
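A rough sketch of that data-fetching side, loosely modelled on the experimental react-cache idea (the `createResource` name mirrors that API, but this is not React's implementation): a resource's `read()` throws the pending promise so the boundary can show a loading state, and returns the cached value synchronously on later reads.

```javascript
// Illustrative only: a tiny cache whose read() follows the Suspense protocol.
function createResource(fetcher) {
  const cache = new Map(); // key -> { status, value, promise }
  return {
    read(key) {
      let entry = cache.get(key);
      if (!entry) {
        entry = { status: 'pending' };
        entry.promise = fetcher(key).then(
          (value) => { entry.status = 'resolved'; entry.value = value; },
          (error) => { entry.status = 'rejected'; entry.value = error; }
        );
        cache.set(key, entry);
      }
      if (entry.status === 'pending') throw entry.promise; // suspend: show loading UI
      if (entry.status === 'rejected') throw entry.value;  // an error boundary handles this
      return entry.value; // cached: no repeated network request
    },
  };
}
```

This is how the three concerns above collapse into one pattern: loading is the thrown promise, errors are thrown to an error boundary, and caching is the map.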
Additional resources which might be of interest:
Dan Abramov's talk at JSConf introducing Suspense to the world.
A nice post on Medium showing the benefit of code splitting and its impact on TTI.

How do I scrape constantly updated JavaScript post-login using Python?

I know there are many similar questions; however, they each cover only a piece of the problem I have, and I haven't been successful in putting the information together.
I am using a FLIR ax8 thermal camera, and this camera has a web-interface that one can interact with via ethernet. Long story short, temperature values are constantly displayed and updated, and I would like to scrape those values. I would like to do this without opening a browser with a GUI, and just be able to call every so often to get them.
The first step is a simple login page, located at "cameraIP"/login. It's very basic, but I need a solution that gets me through it and lets me maintain the login session. After that, it's just the interface. Attached are two images: the first shows the interface as seen in Chrome, and the second is terminal output of what I scraped using Python's Requests module.
As you can see, the numbers are clearly not there, as they are rendered by JavaScript. This is essentially all I have to work with. If someone could give advice on how to get those temperature values every so often, that would be great.
If there are ANY questions, just leave a comment down below and I can provide more information, such as the JS files listed under the web interface if they are needed.
I personally use scrapy-splash to render the JavaScript when scraping with Scrapy: http://splash.readthedocs.io/en/stable/

Javascript - Programmatically batch print HTML documents

tl;dr: I'm looking for a good way to batch print database-stored HTML documents from JavaScript.
Our users generate rich text content via an open source WYSIWYG JavaScript-based text editor (CKEditor). The HTML content is saved to our database and can be printed directly from the editor via its built-in print functionality (basically just window.print()). This is great and works perfectly.
Now, we have a need to batch print saved documents and I am looking for workable solutions. There are various options that I can see, but all have large tradeoffs:
User selects documents to print. JS code loops through documents and calls print one-by-one. The issue here is that the user will see a bunch of print dialogs. Which is painful. (Aside: we are using Chrome but I do not have the option of setting it to kiosk mode)
User selects documents to print. JS code combines all of these in a single (hidden) container and they are all printed as one 'document'. These can be fairly big documents with tables, images, etc. I worry about the performance involved here as we could be adding a significant amount to the DOM.
Similar to #2 above, but at some point the documents are converted and saved to a single PDF. This would be OK, but there don't seem to be many good/cost-effective options for converting HTML to PDF.
Generate some kind of a report that can handle HTML content. I took a look at SQL Server reporting services but it supports a very limited set of HTML tags and CSS properties.
Is there a better way to batch print HTML content from javascript? Any help is very much appreciated!
Edit: As per @Paul's comment, I need to clarify a few points:
The content is that which is created in your standard online text editor. In my case:
No iframes
No animations
No dynamic content
Now, were I to print straight from the editor a print stylesheet would be applied, so this may complicate things a bit.
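For what it's worth, option #2 above boils down to string/DOM assembly plus CSS page breaks. A hedged sketch (the helper name is hypothetical, and the page-break styling would need checking against the print stylesheet mentioned above):

```javascript
// Combine several stored HTML documents into one printable page,
// separated by forced page breaks between documents.
function combineForPrint(documents) {
  return documents
    .map((html) => `<div style="page-break-after: always;">${html}</div>`)
    .join('\n');
}

// Browser-side usage (sketch): inject into a hidden container, print once.
// const container = document.createElement('div');
// container.innerHTML = combineForPrint(selectedDocs);
// document.body.appendChild(container);
// window.print();
// container.remove();
```

This gives a single print dialog, at the cost of the DOM-size concern raised in the question.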
Since the content could potentially be large and consume a lot of memory, I would do this on the server side: select the documents on the client and ask the server to render them to PDFs, e.g. using PhantomJS. This would then even allow mobile clients to fetch the PDFs.
I completely agree with the answer above; PhantomJS would probably be the best option. The only problem is reliability: PhantomJS has been pretty touch-and-go over the last few versions. If the documents become too large, it may be too much for Phantom to handle (remember that it was originally designed for web testing and morphed into web automation). When writing the script for this, I would suggest following the outline below to break the process up into more manageable steps:
var steps = [
    function () {
        // step 1: e.g. load the next document
    },
    function () {
        // step 2: e.g. render it to PDF
    }
];

// run the steps one after another
steps.forEach(function (step) { step(); });
Again, it's not a perfect option overall, but it is the best one we have to work with for now. If you have any questions feel free to reach out, I'm working on web automation myself so this will all be fresh in my mind.
Download for PhantomJS Here

Linking a tracked Event with Goal conversions or Ecommerce transactions in Google Analytics

I have a goal conversion set up that tracks hits on my site's receipt page, basically to know how many people made successful transactions. I've also set up ecommerce tracking for this page so I have both goals and ecommerce.
One of the ways the site boosts sales is through a small interactive widget that guides users to which product best fits them. I've set up events tracking for that widget to track user click-throughs there.
Now I want to link the 2 together and find out how many goal conversions/ecommerce transactions passed through the interactive widget. Basically, I want to track how many conversions I got from that widget click event.
Is there a built-in way to do this using GA without touching application code?
Jensen, I'm afraid events are not the best option for analyzing this.
The reason is that you are dealing with different scopes (a goal/e-commerce transaction has session scope, whereas events have page/hit scope). See this brilliant article by Avinash.
What this all means is that if you look at the report Jensen suggests, you might draw wrong conclusions. What you will see is that if a visit ended with a $2,000 transaction and events A, B and C were triggered, then all of them will be assigned the $2,000 "value".
That's the reason why the total e-commerce revenue in this report will be inflated if you compare it to the numbers in the dedicated e-commerce reports. See this screenshot (sorry for blurring some parts):
A much better way to do this is to create various segments for visits/users and then compare their overall performance (conversion rate, number of products bought, average sale amount, etc.).
Those segments might be tricky as well (I don't know the specific details about your website), but they might actually give you some answers.
A few suggestions:
visits that used the up-sell features and ended up converting
visits that used the up-sell features and ended up NOT converting
And various other combinations; the new segment builder is quite powerful (it even allows you to use sequential steps). It will take some time, so make sure that every single time, before diving into the reports and numbers, you have a hypothesis you are trying to validate.
In other words -- think twice, analyze once :-). Hope this helps!
Well, this is a bit embarrassing. I was looking around the GA views trying to see how I could do this, and I found the answer myself.
It's very simple: just go to Behavior > Events > Top Events, then in the Explorer section click on "Ecommerce". This will show a view that includes the top events and their transaction/e-commerce details.

I need to code a dynamic report builder in ASP.NET, where should I start?

I've been tasked with creating a dynamic report builder to extend our current product that should allow our users to configure with relative ease a useful report drawing data from what they've inputted into the system. Currently we customize these reports manually, and this process involves a developer (me) taking the requirements of the report (fields, aggregate totals, percentages, etc) and publishing the results as a relatively interactive page that allows for the ability to 'drill down' for more information in the record rows, etc.
The reports are not extremely complicated, but they're involved enough that programmatically generating these reports doesn't seem possible. I feel like creating an interface to allow the users to customize the look of the report shouldn't be too difficult, though involved in and of itself. Where I am at a loss is how to create an interface that will allow users who have absolutely no 'programming' literacy the ability to easily generate the SQL queries that will pull the information they need.
In fact, they need to be able to create these queries and access the bowels of their inputted data without ever being aware of what they're really doing. I feel for this to work as required, the generation of the report has to be as indistinguishable from magic as possible. The user should be able to drag and drop what he/she needs from sets of possible data and magically produce an report.
I'm up for the challenge of course, but I really don't know where to start. Once I get the gears moving, resolving individual issues will be 'easy' (well, actually more like part of the process), but getting off the ground has been challenging and frustrating. If anyone can offer me a direction to search in, I'm not afraid of putting in the hours. Thank you for your time, and I look forward to some positive suggestions.
I was tasked with something like this before. My advice: don't. Unless the required reports are extremely basic and your users don't care how the reports look, it will take a significant amount of time to implement. Since you indicate you're a single-person team, just don't. It'd be cheaper for you (even in the long run) to hire a junior developer or an intern to handle this part of the job.
Now, there are a few different report designers out there. I've not seen any that work entirely in a web page, and all of them were pretty bad from a non-programmer's perspective.
Now, there are ways around this. Most of the people wanting these types of reports know how to work Microsoft Access. You can leverage their knowledge of this to let them create their own reports. This isn't trivial though as you don't want them just connecting to your database. So, here's what I recommend:
Generate a downloadable database compatible with Access
Ensure that the downloaded database is "easy" to work on. This means duplicating data and denormalizing a lot of things
Ensure you don't leave anything sensitive in the downloadable database (passwords, internal things they shouldn't see, etc)
And finally, ensure they can download it in a secure manner and that it's performant. You may need to tell your users the downloadable database is only "synced" once a week or month since it's relatively expensive to sync this in real-time
Have a look at what data warehouses do (e.g. The Data Warehouse Toolkit). They create several basic tables that are very wide, contain a lot of redundant data and each cover a certain aspect of the database.
I would create several such wide views and let the users select a single view as the basis for a dynamic report. They can then choose the columns to display, the sorting and the grouping. But they cannot choose any additional tables or views.
Of course, a typical view must really cover everything regarding a certain aspect of the database. Let's assume you have an Order Items view. Such a view would contain all items of all orders, offering hundreds of columns that cover:
The order product ID, product name, regular price, discount, paid price, price incl. the associated part of the shipping cost etc.
The order ID, order date, delivery date, the shipping cost etc.
The customer ID, customer name, customer address etc.
Each date consists of several columns: full date, day of year, month, year, quarter, quarter with year, etc.
Each address consists of the full address, the city, the state, the area, the area code, etc.
That way, the dynamic reporting is rather easy to use because the users don't need to join any tables but have all the data they need.
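To make the "columns, sorting and grouping over a single wide view" idea concrete, here is a hedged JavaScript sketch of turning a user's selections into a query against one pre-built view. All names and the selection shape are hypothetical, and in a real system every identifier must be validated against a whitelist of the view's columns rather than interpolated from user input:

```javascript
// Build a SELECT over a single wide view from UI selections.
// Hypothetical selection shape: { view, columns, groupBy, orderBy }.
function buildReportQuery(selection) {
  const grouped = selection.groupBy && selection.groupBy.length;
  const cols = grouped
    ? selection.groupBy.concat(
        selection.columns.map((c) => `${c.aggregate}(${c.name}) AS ${c.alias}`))
    : selection.columns.map((c) => c.name);
  let sql = `SELECT ${cols.join(', ')} FROM ${selection.view}`;
  if (grouped) sql += ` GROUP BY ${selection.groupBy.join(', ')}`;
  if (selection.orderBy && selection.orderBy.length) {
    sql += ` ORDER BY ${selection.orderBy.join(', ')}`;
  }
  return sql;
}
```

Because the user can only ever pick one view and its columns, the generated SQL never needs joins, which is what keeps the "magic" tractable.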
I'd recommend looking at ready-made reporting components, for example Microsoft's Reporting Services, Telerik, DevExpress or (I should confess, our product) SharpShooter Reports.
Start by looking around: what kinds of reporting tools are out there? Is there anything that even comes close to what they are expecting?
The tools out there will be generic, and your case might be rather specific. You already know part of the answer your users are looking for.
Your solution should help them in that way.
You used the word "magic". That should be a huge warning sign. As a developer, we don't do magic, we do logic. We can create illusions, we can't do magic.
I would dive into Sql Analysis Services and Excel.
There is a presentation over here. These guys don't do magic either, but they are able to do a lot.
We use a combination of EasyQuery and FastReport.NET.
EasyQuery allows our users to build dynamic queries and extract the data necessary for report and FastReport - for actual report generation and exporting it to Excel or PDF.
Take a look at zpmsoftware.com. It has an open source report builder for ASP.NET MVC and outputs to screen, Excel and PDF. It's not too much trouble to adapt it to WebForms. Since most of the fancy stuff is in jQuery/JavaScript, adapting it to other server environments should be doable.
