Where are load-opportunities and diagnostics in the Lighthouse API? - javascript

I am showing website audits on my site using the Google Lighthouse npm package, and I am getting the response.
But where can I find load-opportunities and diagnostics? I am not able to find these two groups anywhere in the response.
According to the first answer I have seen, I should be able to find load-opportunities and diagnostics under auditRefs, but there are no groups with those names. I can only see the metrics, budget, and hidden groups.
According to another answer I can find opportunities under type=opportunity, which I found, but where can I find diagnostics? There is no such type.
Can anyone help me with that?
The response is about 40k lines, so I cannot attach it.

To view the "Load opportunities" and "Diagnostics" sections in a Lighthouse report, you can do the following:
Run a Lighthouse audit for your website using a tool that supports these sections, such as the Lighthouse Chrome extension or the Lighthouse CLI.
After the audit is complete, open the Lighthouse report in your web browser or inspect the CLI output.
Look for "Load opportunities" and "Diagnostics" in the report. Note that they are not top-level categories like "Performance", "Accessibility", "Best Practices", and "SEO"; they appear as sections within the "Performance" category.
Expand each section to view the specific opportunities and diagnostics that Lighthouse identified.
If you are still unable to find "Load opportunities" and "Diagnostics" in your Lighthouse report, it is possible that the Performance category was not included in the audit options that were selected. In that case, you may need to run a new audit with the desired categories selected.
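The same sections can be located programmatically in the Lighthouse result object (`lhr`). Below is a minimal Node sketch, with the caveat that group names vary between Lighthouse versions: opportunity audits may carry the `load-opportunities` group on their `auditRef`, or be identifiable only by `details.type === 'opportunity'`, while diagnostics audits carry the `diagnostics` group. The tiny `lhr` fragment here is hypothetical, for illustration only.

```javascript
// Sketch: locate "opportunity" and "diagnostics" audits in a Lighthouse
// result object (lhr). Checks both the auditRef group and the audit's
// details.type, since the grouping changed across Lighthouse versions.
function collectPerfSections(lhr) {
  const refs = lhr.categories.performance.auditRefs || [];
  const sections = { opportunities: [], diagnostics: [] };
  for (const ref of refs) {
    const audit = lhr.audits[ref.id];
    if (!audit) continue;
    const isOpportunity =
      ref.group === 'load-opportunities' ||
      (audit.details && audit.details.type === 'opportunity');
    if (isOpportunity) {
      sections.opportunities.push(ref.id);
    } else if (ref.group === 'diagnostics') {
      sections.diagnostics.push(ref.id);
    }
  }
  return sections;
}

// Hypothetical minimal lhr fragment, for illustration only:
const lhr = {
  categories: {
    performance: {
      auditRefs: [
        { id: 'render-blocking-resources', group: 'diagnostics' },
        { id: 'unused-css-rules', group: 'diagnostics' },
        { id: 'mainthread-work-breakdown', group: 'diagnostics' },
      ],
    },
  },
  audits: {
    'render-blocking-resources': { details: { type: 'opportunity' } },
    'unused-css-rules': { details: { type: 'opportunity' } },
    'mainthread-work-breakdown': { details: { type: 'table' } },
  },
};

console.log(collectPerfSections(lhr));
```

With a real 40k-line response you would pass the parsed JSON (the `lhr` object returned by the npm package) straight into `collectPerfSections`.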

Related

How do I generate a summary report for each scenario at once on K6 tool?

I could run some performance tests successfully with K6. However, I've been trying to generate a separate summary report for each of the 4 scenarios in a single run, and I couldn't. The workaround is to keep a single scenario (comment out or remove the others), run the test, and generate the summary report; then swap in the next scenario and repeat.
Is there any approach I could follow to generate 4 summary reports, one per scenario, with a single run? I tried, but I got a single summary report without the numbers split per scenario.
Unfortunately this is not easily possible right now.
One creative solution to avoid manually commenting out and rerunning the script is to use an environment variable to conditionally enable certain scenarios. Take a look at this example on the forum.
The summary report is just the result of some handy calculations based on the test's metrics, but if you don't mind calculating those yourself all metrics have a default "scenario" tag, so you could filter the metrics per scenario in whatever output system or processing tool you wish to use. For example, you could do the calculations with jq if you export the results to JSON, or in a Grafana dashboard using InfluxQL, etc.
You also might be interested in recent changes to the summary report (tentatively landing in the upcoming v0.30.0), which will make generating the report much more flexible. Separating it per scenario isn't currently planned, but feel free to propose the feature in a GitHub issue and we can discuss it there (disclaimer: I'm one of the maintainers).
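As a sketch of the per-scenario filtering mentioned above, here is a small Node script that averages `http_req_duration` per scenario from k6's JSON output (`k6 run --out json=results.json`). The exact line format is an assumption based on k6's documented JSON output (one JSON object per line, metric samples with `type: "Point"` and a `scenario` tag); the sample data is hypothetical, and you would read the real file with `fs.readFileSync`.

```javascript
// Sketch: compute an average http_req_duration per scenario from k6's
// newline-delimited JSON output. Each metric sample is a line with
// type "Point" and a data.tags.scenario field.
function avgDurationPerScenario(jsonLines) {
  const sums = {};
  for (const line of jsonLines.split('\n')) {
    if (!line.trim()) continue;
    const entry = JSON.parse(line);
    if (entry.type !== 'Point' || entry.metric !== 'http_req_duration') continue;
    const scenario = (entry.data.tags && entry.data.tags.scenario) || 'default';
    if (!sums[scenario]) sums[scenario] = { total: 0, count: 0 };
    sums[scenario].total += entry.data.value;
    sums[scenario].count += 1;
  }
  const result = {};
  for (const [name, s] of Object.entries(sums)) {
    result[name] = s.total / s.count;
  }
  return result;
}

// Hypothetical sample lines for two scenarios:
const sample = [
  '{"type":"Point","metric":"http_req_duration","data":{"value":100,"tags":{"scenario":"browse"}}}',
  '{"type":"Point","metric":"http_req_duration","data":{"value":300,"tags":{"scenario":"browse"}}}',
  '{"type":"Point","metric":"http_req_duration","data":{"value":50,"tags":{"scenario":"checkout"}}}',
].join('\n');

console.log(avgDurationPerScenario(sample)); // { browse: 200, checkout: 50 }
```

The same grouping can of course be done with `jq` or in Grafana, as the answer notes; the point is only that the `scenario` tag is enough to split the numbers.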

I want to track changes (review comments/edits) made to a document that we are displaying on a web page.

I want to track changes (review comments/edits) made to a document that we are displaying on a web page. What is the best way to do it other than Google Docs/Dropbox and Microsoft Word Online?
Need more info; for now I assume that you and your team have a shared document that you are editing together and you want to track changes. I can propose three easy solutions for you.
Learn Markdown (it's really simple) and use source control, for example Git. Learning Markdown and Git should take about 4-12 hours. From my point of view it is a very good, free, and easy solution.
A small intro to Git - the minimum set of commands. You can also use a UI client, but I think it is better to learn with the console and then start using a UI:
git init
git remote add origin https://github.com/yourRepository
git add .
git commit -m "description of the update"
git push -u origin master
Another one is Evernote. Evernote allows you to create shareable documents (notes), similar to Google Docs, and with a premium account (costs $70/year) you can take snapshots of the notes and later compare them, similar to version control. The big advantage of Evernote is that it is easy to use, and you get a UI similar to Word and other text editors that are (I guess) familiar to you. Evernote also has a lot of functionality for team organization. I settled into Evernote within a week.
Atlassian Confluence. I can't tell you a lot, because personally I'm not an active user (usually I only read content), but I can say it is a great tool. It is also used by my team. It costs $10/$20 a month.
All of these solutions will let you have shared, organized documents with change tracking and history management. For my personal organization tasks Evernote and Git were enough; I switched to Evernote because it is more human-friendly, but source control, and especially Atlassian, provide much more powerful and useful possibilities.

Salesforce data loss prevention - can I embed JS on standard salesforce pages?

I'm looking into possible ways to control and monitor data leaving our Salesforce Org. Currently solutions appear to fall into two broad categories:
Lock down features for users using profiles. E.g. Prevent certain kinds of users from having reporting, or exporting rights
Have third party monitoring software installed on work machine which monitor and control interactions with salesforce.com
Neither of these suits our requirements. Our problem is with users who need to be able to run and extract reports but doing so from some internet cafe. We also can't restrict them to work machines as a lot of them are travelling salespeople.
Furthermore, Salesforce has said they don't provide any kind of report on which reports have been run or what data has been exported.
I'm investigating the third possibility which is bolt some sort of monitoring JS code onto salesforce.com itself. If possible, I'd like to embed JS on the salesforce Report tab (and any other page where data can be exported) and intercept clicks to the "Run Report" or "Export" buttons. I'd call a logging web service with the user's name, the report parameters, time etc.
Does anyone know if it's possible to embed custom JS on salesforce pages? Or any neater solution to the above?
Thanks for your help
Ray
Salesforce is very protective of their code base, to the degree that even custom Apex code runs on a completely different domain, so that cross-domain scripting restrictions prevent us from tweaking their pages :) So unless a man-in-the-middle SSL attack is used, there is no way to inject something into their code.
Maybe a Greasemonkey script? But users could remove it or just use another browser.
I do not think you have an ideal solution here other than security, either profile (object level) or sharing (row level). Think of it this way: someone keen on stealing data could just grab the HTML of the detail pages of the rows in a report, extract the raw data from the HTML, and run the reports externally. Maybe force the travelling salespeople to use RDP to office-located machines?
Another option would be to turn a subset of your reports into Visualforce pages (write the SOQL and Apex needed to gather the data, then write VF markup to display it) and then log and/or restrict who can access those pages and when (including checking the source IP).
There's clearly some serious effort involved in moving more complex reports over to this format, but it's the only way I can think of meeting your requirements.
This would also allow you to not include any sort of export options, although they could of course save the raw HTML of the page if they really wanted to.
To embed JavaScript in a standard SFDC page, go to "Your Name" => "Setup" => "Customize" => "Home" => "Home Page Components" => click the edit link next to "Messages & Alerts". In the "Edit Messages and Alerts" page there is a text area into which you can paste JavaScript code that will be executed on almost every Salesforce page.
Here are a few notes when doing this:
Do not put empty lines in your code, because the system will wrap them in a p HTML tag.
Use absolute references to your Salesforce pages, because managed packages have a different URL structure.
I'm not sure how much longer Salesforce will allow this, but it currently works.
For more info see http://sfdc.arrowpointe.com/2011/11/18/email-autocomplete-using-jquery/
Slightly different way of doing it https://salesforce.stackexchange.com/questions/482/how-can-i-execute-javascript-on-a-sfdc-standard-detail-page
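For completeness, here is a rough sketch of the interception idea from the question, of the kind you could paste into the Messages & Alerts component described above. The button labels, the `UserContext` global, and the `/logReportAccess` endpoint are all assumptions; you would have to match the actual Salesforce markup and supply your own logging web service.

```javascript
// Sketch of the click-interception idea. Button labels, the UserContext
// global, and the logging endpoint below are hypothetical placeholders.
function buildAuditLogEntry(userName, action, reportName) {
  return {
    user: userName,
    action: action,          // e.g. "Run Report" or "Export"
    report: reportName,
    timestamp: new Date().toISOString(),
  };
}

// The DOM wiring only runs in a browser, where this script is injected
// via the Messages & Alerts home page component described above.
if (typeof document !== 'undefined') {
  document.addEventListener('click', function (event) {
    const label = event.target.value || event.target.textContent || '';
    if (label === 'Run Report' || label === 'Export') {
      const entry = buildAuditLogEntry(
        window.UserContext ? window.UserContext.userName : 'unknown',
        label,
        document.title
      );
      // Hypothetical logging web service:
      fetch('https://example.com/logReportAccess', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(entry),
      });
    }
  }, true); // capture phase, so the log fires before Salesforce handles the click
}
```

Note this is monitoring, not prevention: as the answers above point out, a determined user can still bypass any client-side script, so treat it as an audit trail at best.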

Scrape gmail for the last time external pop accounts were checked and check them if longer than X time since the last check

Goal:
To develop a script that will check the last time my external POP accounts were checked by Google -- while not being logged in. If the time exceeds some amount, then check the POP account.
My reason:
I use an offline client. I don't want to be logged into Gmail, and I want all my external email to flow through Gmail. Sometimes an important email comes in and I have to log into Gmail, go to the account section, and then click "check email". This is incredibly annoying. I wish they had the ability to poll POP accounts at a specified frequency; instead they use an algorithm whose interval can range from 1 minute to 1 hour.
My approaches so far:
I can log into Gmail using curl and scrape the pages. The problem is that Google uses JavaScript/Ajax goodness, so curl only gets the HTML version of Gmail, and that version does not have the info I am looking for. It's only available in the Ajax version of Gmail.
I can use Selenium, but then essentially I have to have Firefox open. I don't want that. I want a solution that can run in the background and check every 10 minutes.
My suspicions on how to go about this:
I've seen several posts about using headless browsers with javascript capabilities. Apparently some of these can be controlled using python. However, this seems quite complicated.
Thus, my questions
What is the best way to solve my problem? My preference is to use python, but I am open to other languages as well. Will I have to use javascript to accomplish this task? Is a headless browser necessary or are there other alternatives?
Thank you.
Probably http://www.phantomjs.org/ is going to be the best tool for this job. There are lots of examples in their GitHub repository for how to do this type of thing, and people have had good success with complex scraping tasks using it.

3rd party tool that can generate a list related articles for a japanese-language site

I run a site with a large number of news articles. I'm looking for a 3rd-party tool (or widget) that, when placed on an article page, would generate a list of related articles within the same site.
So my requirements are:
Returns a list of links to related articles
Has to be integrated front-end (JavaScript, Ajax, etc.)
Has to sustain large traffic and display results quickly
Most importantly, must support Japanese language content
Any ideas on tools, widgets, services out there would be great - thanks!
nRelate has this exact product as a simple WordPress plugin - used both for related content and for most popular content.
http://nrelate.com/install-products/
Gravity also does something similar, but I believe you have to be a large website to gain access to their tools.
http://www.gravity.com/publishers
