Selenium WebDriver crawler for ServiceNow: automating the search and opening of a task with Python

I've been trying to get a Selenium script to run on ServiceNow.
What I need it to do is open SNOW and, in my to-do list, find the first task whose short_description is exactly the string "Set Initial Account Passwords".
When I try to inspect the element I have to do it twice, because the first time the pane doesn't expand to where the element is.
If I copy the selector it looks something like this: #row_sc_task_dc9cxxxxxxxxxxxxxxxx > td:nth-child(5)
If I copy the element, it's this one: Set Initial Account Passwords
Now, when I try to zero in on that element, it is never found.
I even tried logging every element and sub-element and every selector; I had it expand every table and whatever else I could think of into a file.log, and it still can't find the sc_task, the string "Set Initial Account Passwords", or the "ng-non-bindable".
I've tried everything I could find to read everything on the site, even whatever can be read as plain text, and nothing.
Has anybody had to deal with this at some point?
I'm trying to automate an action and everything else works fine; it's just this blessed thing that won't open tickets from the list, and having to specify each task number by hand would defeat the purpose of automating this.
Oh, and I've already tried ChatGPT and it's no good; it won't do what I need.
Here are two screenshots of what this looks like, in case that helps: https://i.stack.imgur.com/XUeQw.jpg
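A likely culprit (not confirmed by the screenshots, but typical of the classic ServiceNow UI) is that the task list is rendered inside an iframe, commonly named gsft_main, so nothing in it is visible to Selenium until you switch into that frame. A minimal Python sketch under that assumption; the XPath is derived from the row selector quoted above, and the helper names are my own:

```python
def task_cell_xpath(short_description):
    # XPath for the list cell whose visible text is exactly the short
    # description; the row id pattern ("row_sc_task_<sys_id>") comes from
    # the copied selector quoted in the question.
    return ('//tr[starts-with(@id, "row_sc_task")]'
            f'//td[normalize-space()="{short_description}"]')

def open_first_task(driver, short_description="Set Initial Account Passwords",
                    timeout=20):
    # Imported here so the XPath helper above stays usable without Selenium.
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    # Assumption: the classic ServiceNow UI puts lists inside an iframe
    # named "gsft_main" -- the usual reason find_element comes up empty.
    WebDriverWait(driver, timeout).until(
        EC.frame_to_be_available_and_switch_to_it((By.NAME, "gsft_main")))
    cell = WebDriverWait(driver, timeout).until(
        EC.element_to_be_clickable((By.XPATH,
                                    task_cell_xpath(short_description))))
    cell.click()
```

If the frame name differs in your instance, check the iframe's name/id in the inspector; everything else stays the same.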

Related

Prestashop pagination: weird behaviour of the previous button

I'm working on building a shop with PrestaShop.
I have a problem with the previous button in the pagination on the category page: it always adds something to the href. That alone wouldn't be a problem, but I wrote JavaScript that handles things like changing products per page or category filters by rewriting those links.
Let's say my actual link is something like
localhost/index.php?id_category=10&controller=category?resultsPerPage=18&page=2
Normally the previous button should have an href of
localhost/index.php?id_category=10&controller=category?resultsPerPage=18
But in my case it has an href of
localhost/index.php?id_category=10&controller=category?resultsPerPage=18y
I tried to write a JS script to slice the last letter from the href, but it just doesn't work; there's always something appended. A letter isn't really a problem (it just looks weird, and as far as I've tested, the site works fine), but if it's a digit it can change properties such as resultsPerPage. I wonder if it's a problem with Pagination.php and its link-building method, because nothing in my scripts could cause this. Has anyone had the same problem and found a solution?
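If the root cause is in Pagination.php, fixing the link builder is the real solution; as a client-side stopgap, though, the URL can be normalized properly instead of blindly slicing the last character. Note that the example URLs above already contain a second '?' where an '&' belongs, which is itself malformed. A Python sketch of the idea (the helper name and the "trim trailing junk from numeric values" heuristic are my own assumptions, not PrestaShop behaviour):

```python
import re
from urllib.parse import parse_qsl, urlencode

def clean_url(url):
    """Rebuild a pagination href whose query string has picked up junk.

    Heuristics (assumptions): every '?' after the first is treated as '&',
    and a value that starts with digits but has trailing characters
    ('18y') is trimmed to its numeric prefix ('18')."""
    first_q = url.find('?')
    if first_q == -1:
        return url
    base, query = url[:first_q], url[first_q + 1:].replace('?', '&')
    params = []
    for key, value in parse_qsl(query):
        match = re.match(r'\d+', value)
        if match and match.group(0) != value:
            value = match.group(0)  # '18y' -> '18'
        params.append((key, value))
    return base + '?' + urlencode(params)
```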

Find XPath for element with dynamic location on page (jump straight to element?)

I am trying to write Selenium tests for a webpage that reloads its alerts every so often, as the database changes frequently. The alerts are in a <ul> as <li> elements, and all have unique titles in an <h4> element below the <li> (with a lot of divs in between).
It'll look something like:
<h4 title="Title" class="message-title alarm-list-name ng-binding">Displayed Value matches Title Value</h4>
When I use Chrome's "Copy XPath" functionality, I get paths like //*[@id=": 0"]/a/div[2]/h5[1], but of course this changes every time the page reloads (and I still get "no element found" when I use this one). I want to know if there's a way to jump straight to the <h4> with its particular value, as this would be the easiest way to handle this problem, I think. I've tried //h4[@title="Title"], but I get no element found.
I've also tried driver.findElement(By.linkText("Text")); with similar results. I tend to think XPath will be better because it seems more versatile.
If you have other Selenium recommendations, I'd appreciate it if you could demonstrate them using the WebDriver plugin for JMeter in JavaScript, since that's the tool I've been using.
A bit more information would have helped us address your query: whether you want to extract the text Displayed Value matches Title Value or invoke some method on it.
However, as per the HTML you have shared, since the WebElement is an Angular element you need to induce WebDriverWait with the proper ExpectedConditions as follows (Java solution):
To extract the text Displayed Value matches Title Value :
new WebDriverWait(driver, 20).until(ExpectedConditions.textToBePresentInElementLocated(By.xpath("//h4[@class='message-title alarm-list-name ng-binding' and @title='Title']"), "Displayed Value matches Title Value"));
System.out.println(driver.findElement(By.xpath("//h4[@class='message-title alarm-list-name ng-binding' and @title='Title']")).getAttribute("innerHTML"));
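For anyone working outside Java, the same wait-then-read pattern looks like this in Python (a sketch: the 20-second timeout mirrors the Java answer, and the helper names are my own). Note the attribute axis is '@', not '#' -- the '#' in the paths above is a transcription artifact:

```python
def title_xpath(title):
    # Locate the alert by its title attribute -- '@title', which is
    # what a working attribute-based XPath actually uses.
    return f'//h4[@title="{title}"]'

def read_alert_text(driver, title, timeout=20):
    # Imported here so title_xpath stays usable without Selenium installed.
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    locator = (By.XPATH, title_xpath(title))
    element = WebDriverWait(driver, timeout).until(
        EC.presence_of_element_located(locator))
    return element.text
```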

Getting a value from a div element on a live/running site

So let's say I have a site like this one, http://www.worldtimeserver.com/,
and I want to somehow get the value of the time every second and write that value to a .txt file. I was going to use OCR (optical character recognition) software for this job, but in the end that wasn't a good option because I could not rely on the exact position of the clock.
Then I started to think: "is there a way to inject/put some code into the browser that would do this?" When I inspected the web page (in Chrome) I saw that the div containing the time has an id="theTime". So is there a way to do this? I have some basic experience with JS and the DOM, but I have no idea how to do this or where to start. I'd also like to point out that the script needs to do this job reliably for hours and hours, and that the value of the clock is set from outside (by a server).
If the value does not require the browser to refresh in order to change, you can save it using localStorage and later copy-paste it into a .txt file.
This is a possible duplicate:
Get value of input field inside an iframe
Use an iframe (you can hide it if you don't want users to see it).

Firebug - Console - Filter information shown

I use Firebug a lot, but really just the basics; I don't pretend to understand it in any detail, though it is very useful.
I would like to show just the console.log entries that I have inserted in the JavaScript, and not the other stuff. You can filter what is shown, and I thought that "DebugInfo" might show console.log entries, but it doesn't.
The problem with reading the console is that I have Ajax requests every few milliseconds; these fill up the page in no time, it starts scrolling, and it is difficult to spot the relevant info passing by.
Ideally I would like to filter out all the GET http... entries and just have the console.log info. Is there a way to do this, or a different approach that would make it easier for me to debug? A pause button would work, but having googled that, it doesn't seem to be possible.
My application is on a web server on a microchip, so I don't want to set breakpoints, as I would need to reprogram often as I test different things, and that takes too long.
Beside the word "Console" in Firebug you should see a little down arrow. If you click that you get an option for "Show XMLHttpRequests". Untick that and try again.
You can also filter in Chrome's Developer Tools by clicking the blue filter icon.

Refreshing "online" users with JavaScript

I have a chat app where it shows users who are online (username + profile pic). I have an ajax poll that basically checks to see which users are still online, and refreshes the list automatically. Also, the list is ordered based on last activity.
The way I've been doing it is:
Get list of current online users
Clear existing elements
Re-add them (will be ordered correctly since the returned list from step1 is ordered)
This works fine in Chrome, but I notice in Firefox that it is causing a "flickering" effect while the images get re-added.
What is the best way to do this? It seems overly difficult to create an algorithm that checks which elements exist, whether they are in the right order, moves them around, etc. Thoughts?
How often do you poll to see if users are still online?
I think the best way may be to give the user records unique IDs so you can check the list of users that were online against the new list of users that are now online.
Fade out the users that have left and fade in any that have logged on.
It would be a much more elegant solution, and it solves the problem you are having.
Firstly, I would try to "cache" the images separately, using the "preload" technique: create an Image object and set its src to the URL of the userpic, then store all those objects in a global array. This prevents the browser from discarding images that are no longer on screen, so it will not have to load them again when you rebuild the list.
If that doesn't help, I would actually reuse the existing list elements. I would go over the elements, one by one, and replace their content with the corresponding content from the new list. If I run out of existing elements in the process, I add new ones. If any elements are left over when the list ends, I remove them. This is a bit more complex, but actually not as complex as it looks at first glance.
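The reuse-existing-elements strategy in the last paragraph is really just a three-case loop (replace, append, trim). A language-neutral sketch in Python, with a plain list standing in for the <ul>'s children:

```python
def sync_list(nodes, new_items):
    """Mutate `nodes` in place so it matches `new_items`, reusing slots.

    Replacing a slot stands in for rewriting an existing element's
    content, which avoids the remove-and-re-add flicker described above."""
    for i, item in enumerate(new_items):
        if i < len(nodes):
            nodes[i] = item      # reuse an existing element
        else:
            nodes.append(item)   # ran out of elements: add new ones
    del nodes[len(new_items):]   # leftovers at the end: remove them
```

Because the returned list from the server is already ordered, writing items positionally like this keeps the display order correct without any explicit sorting or moving of elements.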
