In Cypress, should assertions be kept within test blocks themselves, or should they be in helper functions/automation steps [closed] - javascript

I recently started working with Cypress, and I see a lot of assertions inside helper functions/automation steps. This seems like a bit of an anti-pattern to me, since those assertions get repeated in every other test that uses the same automation step, e.g. logging in:
cy.get('[data-cy="login-email"]').type('username')
cy.get('[data-cy="login-password"]').type('password')
cy.get('[data-cy="login-btn"]').click()
cy.get('[data-cy="profile-btn"]').should('exist').and('contain', 'username')
Having this should in the helper function/step means every test that logs in also runs assertions it doesn't need, and longer functional tests accumulate a lot of assertions that are irrelevant to the specific test. Am I overthinking this, or is the concern valid? I was also curious whether there is any performance impact: I assume it's negligible for a simple step like this, but could it become a concern for much longer multi-step tests? I couldn't find any other resources on this specifically, so I would love any insight or direction on whether I'm overthinking it or whether it's something to refactor. (For reference, app actions and component testing are not in use here and are not currently an option.)
My fix would be to put assertions into the respective test case's it/specify block instead of the helper functions.
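For illustration, a minimal sketch of that refactor (the command name fillLoginForm is hypothetical; the selectors are the ones from the snippet above):
// Hypothetical helper: performs the actions only, no assertions.
Cypress.Commands.add('fillLoginForm', (username, password) => {
  cy.get('[data-cy="login-email"]').type(username)
  cy.get('[data-cy="login-password"]').type(password)
  cy.get('[data-cy="login-btn"]').click()
})

// The test that actually cares about the outcome owns the assertion:
it('shows the profile button for the logged-in user', () => {
  cy.fillLoginForm('username', 'password')
  cy.get('[data-cy="profile-btn"]').should('exist').and('contain', 'username')
})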

I work with BDD tests. I usually separate verifications from helper functions, but when running time allows it I include verification steps at least for all forms/modals/pages used in the test.
In addition, I add should() assertions to functions that return locators, like this:
cy.get('explore-button').should('be.visible')
provided it doesn't break the logic, of course (e.g. when an element is hidden on purpose or is not visible yet).
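For example, such a locator-returning helper might look like this (the function name and selector are made up for illustration):
// Hypothetical helper: returns the chainable only after a visibility check,
// so callers always work with an element that is known to be on screen.
function getExploreButton() {
  return cy.get('[data-cy="explore-button"]').should('be.visible')
}

// Usage in a test:
getExploreButton().click()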

Your question is valid. You can create a custom command for login, and indeed when working on bigger applications the impact of performing the login (with its actions and assertions) through the UI in every test gets more noticeable: it can cause performance issues and make the test suites more time-consuming, and it also means there is a dependency between our tests, which is generally a bad practice:
Tests should always be able to be run independently from one another and still pass.
That's why the documentation suggests this talk: https://youtu.be/5XQOK0v_YRE?t=512
which covers these recommendations:
Don't use the UI to build up state, set it up directly
Don't share page objects to share UI knowledge, write specs in isolation
Don't limit yourself to trying to act like a user, you have native access to everything
So basically you can make your login command use your authentication API endpoints, save the JWT token in your state, and then just pull it from there instead of re-running your login actions and assertions.
Cypress.Commands.add('login', () => {
  cy.request({
    method: 'POST',
    url: 'BASE_URL/login',
    body: {
      user: {
        email: 'bar@foo.dz',
        password: 'a_pwd',
      }
    }
  })
  .then((resp) => {
    // if your application loads the JWT from localStorage, for example
    window.localStorage.setItem('jwt', resp.body.user.token)
  })
})
Then in your tests:
beforeEach(() => {
  cy.login()
})
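A test can then keep its own assertions in the it block, for example (the route and selector here are just placeholders):
describe('profile', () => {
  beforeEach(() => {
    cy.login()     // programmatic login via the API, as above
    cy.visit('/')  // placeholder route
  })

  it('shows the logged-in user on the profile button', () => {
    // The assertion lives in the test that needs it, not in the login step.
    cy.get('[data-cy="profile-btn"]').should('contain', 'username')
  })
})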
This might be challenging if you want to test social logins; for that, please have a closer look at:
https://docs.cypress.io/guides/references/best-practices#Visiting-external-sites
https://github.com/cypress-io/cypress-example-recipes#logging-in-recipes
https://docs.cypress.io/guides/end-to-end-testing/auth0-authentication

What's the added value of using Observable over normal Array? [closed]

I've been using Observables in Angular's state layer to store app data and share it among the components of the app, believing that with Observables the data would passively update itself in the template whenever it changes, without checking it manually.
So out of curiosity, I made the following demonstration to see the result of not using Observables: stackblitz
It turns out that the template passively updates itself just as well with a normal array instead of Observables.
I'm wondering: what's the added value of using an Observable instead of a normal array to store/share data in an Angular app?
Use RxJS Observables when:
Composing multiple events together
Adding delays
Client-side rate limiting
Coordinating async tasks
Cancellation is required (although you can use AbortController for modern Promises)
They're not a good fit for:
Simple button clicks
Basic form handling
Hello-world apps
UPD: RxJS shines when you need fine-grained control over the stream of events (adding delays, or using operators like debounceTime/throttleTime, etc.). If you only intend to apply some change per event, without controlling the stream itself, then the reactive approach is probably not the best option.
I feel obligated to add that you can implement the same logic (control over the stream) without the reactive approach, but it would be much more error-prone and not easily scalable.
In your particular hello-world app the reactive approach isn't the best option (IMHO). If you want to see the power of RxJS with your own eyes, try writing an app with a search field where the app waits a little before sending the actual request, limiting the number of requests so one isn't made for every new letter (look at debounceTime for reference), and where requests aren't repeated for identical input (the user types something, changes their mind, and returns to the previous input, all within a range of, say, 600 ms; look at distinctUntilChanged).
You'll see that the non-RxJS alternative is much more verbose and difficult to read (and, more importantly, to scale).
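A rough sketch of that search box in plain RxJS (the field id, the 600 ms delay, and the API URL are assumptions):
import { fromEvent } from 'rxjs';
import { debounceTime, distinctUntilChanged, map, switchMap } from 'rxjs/operators';

const input = document.querySelector('#search');

fromEvent(input, 'input').pipe(
  map(event => event.target.value.trim()),
  debounceTime(600),       // wait until the user pauses typing
  distinctUntilChanged(),  // skip values identical to the previous one
  switchMap(term =>        // results from superseded requests are discarded
    fetch(`/api/search?q=${encodeURIComponent(term)}`).then(res => res.json()))
).subscribe(results => {
  console.log(results);
});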

Redux Alternative [closed]

I have recently started learning React and I quickly encountered the problem of a bloated parent component full of functions and state. At first I looked at Flux and then I looked at Redux but both seemed like really overbearing solutions.
I am wondering why something like this is not done:
//EventPropagator.js
let EventPropagator = {
  registerListener(listenerObject) {
    if (this.listeners == null)
      this.listeners = []
    this.listeners.push(listenerObject)
  },
  fireEvent(eventObject) {
    let listenerList = this.listeners.filter(function(listener) {
      return listener.eventType === eventObject.eventType
    })
    listenerList.forEach((listener) => {
      listener.callback(eventObject.payload)
    })
  }
}
export default EventPropagator
To use: components simply registerListener and fireEvent:
//main.js
import EventPropagator from './events/EventPropagator'

EventPropagator.registerListener({
  "eventType": "carry_coconut",
  "callback": (payload) => {
    // actually carry coconut
    this.setState({ "swallow_type": payload.swallow })
    console.log(payload)
  }
})

EventPropagator.fireEvent({
  "eventType": "carry_coconut",
  "payload": { "swallow": "african" }
})
This would allow a lot of decoupling and state could easily be passed around with this sort of event. What are the downsides to this approach?
You should try MobX.
MobX is a state management library that takes advantage of decorators and succeeds in removing unwanted boilerplate; for example, there is no concept of reducers in MobX.
Hope this helps!
There is ReactN, based on hooks.
It is orders of magnitude easier to use compared to Redux.
Check https://www.npmjs.com/package/reactn and read Stover's blog.
Redux is a continuation of React's declarative programming style. A simple way of mapping your application logic to components is to use something like Backbone.React.Component, but using Redux lets you use all the tooling & middleware. Also, having deterministic state transitions in your app makes debugging a lot easier. Now, I agree that you need a lot of wiring & boilerplate to get anywhere.
If you want to use Redux, you could look at something like redux-auto.
Redux-auto: Redux made easy (with a plug-and-play approach)
It removes the boilerplate code in setting up a store & actions. Why? This utility allows you to get up and running with Redux in a fraction of the time!
You can now see a redux-auto helloworld project.
If you want to take advantage of all the Redux benefits, use Redux.
If you only want to keep a variable in sync between React components, use Duix.
I'm Duix's creator. I had been using it for a year, and I finally released it a few hours ago. Check the doc: https://www.npmjs.com/package/duix
You should try Hookstate. It gives the same simple API for local, lifted, and global state. First-class TypeScript. DevTools support. Extendable with plugins. Comprehensive documentation. It has a state-usage-tracking feature and smart re-rendering of only the deeply nested components that are affected, which makes it the fastest state management solution for large, frequently changing states. Disclaimer: I am a maintainer of the project.
What is needed is an event-driven system so a child component can speak to its sibling, but this does not mean that data should be passed as the event's payload. That would lead to a system with no governance and a nightmare for a project with multiple developers working on it.
As long as you follow the flux-like architecture pattern (google it), you can actually do this successfully with just JavaScript and JavaScript objects to hold the data, along with the event system.
Data should be treated as being in one of three different states. It's either:
original data pulled from the server
transformed data (transformed from the original)
store data, which acts as the model the UI is bound to
If you write a JavaScript library to handle the movement of data between the three states and handle the transforms, you can use the event system to fire a "store changed" event that components can listen to and re-render themselves.
https://github.com/deepakpatil84/pure-data
Note: I am the author of this.
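A bare-bones sketch of that "store changed" idea (the names are made up for illustration; this is not the library linked above):
// Minimal illustration: a store that notifies subscribers when it changes.
const listeners = [];

const store = {
  original: null,     // data as pulled from the server
  transformed: null,  // data transformed from the original
  data: {},           // the model the UI is bound to

  setData(next) {
    this.data = next;
    // fire the "store changed" event to every subscribed component
    listeners.forEach(fn => fn(this.data));
  },

  onChange(fn) {
    listeners.push(fn);
  }
};

// A component subscribes and re-renders itself when the store changes,
// e.g. by calling this.setState(data) in React.
store.onChange(data => console.log('store changed', data));

store.setData({ swallow_type: 'african' });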
Using your EventPropagator solves the problem of communication, but it does not handle data integrity. For example, suppose more than two components listen to the same event and use that event's payload to update their own state. The more components listen to the same events, the more complex it becomes to make sure all those components hold the same data.
Secondly, Flux/Redux provides uni-directional data flow. A simple example: two components A & B consume the same data X from an external source (an API or storage). The user can interact with either of them to get the latest X. Now if the user asks B to update X, we have two options with your EventPropagator:
B updates X itself and fires an event to inform A about the update so that A re-renders. The same goes for A if the user asks A to update X. This is bi-directional data flow.
B fires an event asking A to update X and waits for A to fire an event back with the updated X. If the user asks A to update, A does it itself and informs B. This is uni-directional data flow.
Once the number of components grows and communication is no longer limited to just A & B, you need to stick with only one of those two solutions to keep your app's logic from going wild. Flux/Redux chooses the second, and we are happy with that.
If you really don't think you need another library for data control, you should look at the newest React feature: Context. It provides the most essential features a data-control library can have, and it's React only. Note that you have to update your project to React 16.3 to use this feature.
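A compressed sketch of the Context approach (component and value names are invented; React 16.3+ assumed):
import React from 'react';

// Create a context with a default value.
const CoconutContext = React.createContext({ swallow: 'unknown' });

function App() {
  // The Provider makes the value available to any descendant,
  // however deeply nested, without prop drilling.
  return (
    <CoconutContext.Provider value={{ swallow: 'african' }}>
      <Status />
    </CoconutContext.Provider>
  );
}

function Status() {
  // The Consumer reads the nearest Provider's value.
  return (
    <CoconutContext.Consumer>
      {({ swallow }) => <p>Swallow type: {swallow}</p>}
    </CoconutContext.Consumer>
  );
}

export default App;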
Another modest multi-store alternative is react-scopes.
Here are some of its features:
Simple decorator syntax
Allows defining independent store logic directly in components
Easily connects stores through the component tree
Allows templating complex/async data processing
SSR with async data
etc.
I wrote Reducs. It's only 25 lines of code, it doesn't require boilerplate, and... did I mention that it's only 25 lines of code?
I am using it for a big project, and it's holding up very well. There is a live example that uses lit-html and Reducs.
I faced similar issues, with lots of switch/case statements, etc.
I created an alternative for myself; you could take a look:
https://www.npmjs.com/package/micro-reducers
Take a look at JetState for state management with RxJS. I wrote it for my Angular projects and later adapted it for use with React hooks.
PullState. I really want this underdog to shine. I tried other similar libraries that promised quick, no-boilerplate setup; none came close to the ease of use of PullState. I just read the first page of the documentation, implemented it in one of my pages, and it just worked. It is easy to understand if you are already using functional components in React.

Backbonejs, Event Aggregation and Instantiating Views [closed]

tl;dr
To what extent is the Event Aggregator pattern duplicating what Backbone.Router already does? Wouldn't it be better if Views didn't communicate with each other, even with events?
The trouble
I'm having a conceptual problem with the use of an Event Aggregator in a Backbone application. My very basic implementation is derived from Derick Bailey's post about it. I'm using it with RequireJS: I just define a module that extends Backbone.Events and require it in the modules where I need it.
A bit of background: The problem I was trying to solve in the first place was how best to handle transitions between views in my app. Especially automated transitions. It was clear from my reading that two obvious methods were discouraged:
Manually firing a navigation callback on the router with something like router.navigate("login", { trigger: true })
Manipulating the browser window.location.replace("/#login") and thus triggering the relevant routing callback.
As far as I can tell, the main objection to both these methods is a loss of the proper separation of concerns. As I can certainly see the value in avoiding Views having to know too much about each other, I looked about for a better way and found the Event Aggregator.
The problem I'm facing with the Event Aggregator, though, is made quite obvious from Derick Bailey's example on his blog post. The views that respond to the events are already instantiated. Any view that is not already instantiated will naturally not respond to any events. So if, for example, I want an event to instantiate a new View as a result of some logic, I either:
Instantiate a new instance inside the current View. In this case I lose most of the advantage of the approach, because now my Views need to know about each other again.
OR
Create some kind of central event-handler that instantiates new Views based on specific events. Which sounds a lot like the original Backbone Router to me. It seems like I could just separate the logic for each Router callback into various Controllers and I'd have the same clarity.
My second problem is perhaps more general. I feel like the Event Aggregator encourages active inter-communication between Views: admittedly in a decoupled way. Ideally though I'd like to keep that to a minimum. Really I want my user to perform fairly self-contained actions within a defined area and then transition to a new area. Am I being naive, in hoping to sustain that?
I feel like I must be missing something obvious, but right now it feels like the Event Aggregator solves the problem of inter-View communication (which I'm not sure I want), but ends up replicating the Router with regard to View management. The problem I have with that is that I'll have two Event systems acting in parallel.
What I'm really looking for is a sensible, reasonably lightweight pattern for handling the instantiation of Views using the Event Aggregator. I know I could look into Marionette, but I'd prefer to understand this on my own terms first, before diving into another Framework.
Oh backbone, confusing as ever. Everyone still has differing opinions on how a good application should work.
So here's my opinion.
For your first problem, have you considered just using an anchor tag in one of your templates?
<a href="#login">Login</a>
For your second problem, I'd take a page from other frameworks. Before our company switched away from backbone/marionette we tried to adopt the flux (redux) architecture on top of backbone. It has worked out pretty well for some applications that stayed on backbone.
Adopting that architecture just meant defining some best practices. I think the following rules could work for you:
The event aggregator will be the only thing that can fire route changes directly.
At minimum 2 levels of views (or objects) are needed for any page.
The topmost view is called a "smart" or "highly coupled" view (or object). It hooks into the event aggregator and then passes data or callbacks downward. Such views can represent layouts, or can be mostly devoid of style and just transform event aggregator callbacks into something the generic views can work with.
The 2nd level and below should be designed in a decoupled way. We can call them "generic" views; they accept handlers as arguments, and that is how they communicate with the outside world.
This means views have APIs, and smart views need to adhere to those APIs.
Quick Example:
function init() {
  // this level is highly coupled, and knows about the events
  // that will occur
  // (Backbone.Events is a mixin, not a constructor, so mix it into a plain object)
  var aggregator = _.extend({}, Backbone.Events);

  aggregator.on('gotoLogin', function() {
    router.navigate("login", { trigger: true });
  });

  function gotoLogin() {
    aggregator.trigger('gotoLogin');
  }

  // the Todo class is generic, and defines an API that expects
  // a gotoLogin callback
  new Todo({
    el: '.js-todo',
    gotoLogin: gotoLogin,
  });

  // the Header class is generic, and it also expects
  // some way to navigate to the login screen.
  new Header({
    el: '.js-header',
    gotoLogin: gotoLogin,
  });
}
If you are building your application as some sort of single-page app, this will cause you to nest views, which isn't the end of the world, but it does become tricky with Backbone, though still doable. In fact, you may be able to automate the creation of child views (see Marionette). I bring this up because it seems vaguely related to the original question about decoupling; I think the nested-view problem in Backbone is particularly problematic.
I believe people over-emphasize reusable code and decoupling. Honestly, there needs to be some amount of coupling, and finding an acceptable level is difficult. Defining it when everyone using Backbone has differing thoughts on it is very difficult. My recommendation is that you and your team figure out a system that works well for you in Backbone land and stick to it, otherwise chaos will surely ensue.

Too many short PHP files [closed]

I searched around a bit before realizing I need some more info on this. I have an interactive website that uses ajax calls to get info from the server. The (authenticated) user will click around quite a bit to manipulate this data, and it will be continuously uploaded to the server. What I'm not sure about is how to do this in the best possible manner, so that my project is scalable if necessary in the future.
To give an example: the user logs in and a list view is filled with data from the server. If the user double-clicks one of the elements in the list view, he can change its name, and the change should be uploaded to the server immediately. What I've done so far is make a file called "changeName.php" which gets called. Now suppose the user clicks something else; say there are ten different buttons that each change a particular setting. How would I go about uploading all of these different data changes without having ten different PHP files each doing their own little thing? I hope I explained things well enough, but if something is confusing, I'll try my best to clarify.
Rather than POSTing each change to its own PHP file, you can send every request to a single PHP file that handles multiple operations. Include the type of request in the POST data itself. That file can then use a switch statement on that key to determine which operation to perform and call the corresponding function to process the data.
if ( isset( $_POST['_action'] ) ) {
    switch ( $_POST['_action'] ) {
        case 'delete':
            deleteRow();
            break;
        case 'update':
            updateRow();
            break;
        default:
            foo();
    }
}
// Functions omitted
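For completeness, a hedged sketch of what the matching client-side call might look like (the endpoint name handler.php and the field names are assumptions):
// Every operation goes to the same endpoint, with the operation in _action.
const body = new URLSearchParams({
  _action: 'update',
  id: '42',
  name: 'New name'
});

// URLSearchParams is sent as application/x-www-form-urlencoded,
// so PHP will see the fields in $_POST.
fetch('handler.php', { method: 'POST', body })
  .then(response => response.text())
  .then(result => console.log(result));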
Note that as others have said, using a proper MVC framework will make this much easier for you and will have other benefits (security, extensibility, etc). In particular, you can have a framework to take care of routing requests for you.
In a small or prototype project where you don't want to rely on a framework, using a method like the switch statement above can at least get you started.
The best way to keep this organised is to use a proper framework. Laravel is a great option for this and will help keep your code well organised and maintainable.

Is it acceptable to call a function from a function in Node.js? [closed]

I'm learning Node.js by converting an existing Rails project. It's taken some time to get my head around callbacks and the async nature of Node.
My first general attempt looked OK: my RoR functionality was replicated, and general speed was much faster using Node.
Having watched the nodecast here, I started wondering whether my creation went against all the Node principles.
I have some functions which, depending on the results, call other functions. Simple if / else if / else logic.
What I'm trying to figure out is whether it's OK to do this or whether I should be using something like the async package.
One particular function looks like this:
checkAuthorization: function(socket, callback) {
  client.hget(socket.mac, 'authorized', function(err, val) {
    callback(val);
    if (val == null) {
      checkExteralAuth(socket.mac, socket.serial, function(val) {
        data = JSON.parse(val)
        authorized = (data["live"] == 'yep') ? true : false
        setRedisHash(socket.mac, 'authorized', authorized);
      });
    }
  });
}
Is there a more "node" way of doing this?
Why wouldn't it be? Node.js is a JavaScript platform. It's perfectly acceptable to call functions inside functions in a functional programming language. You do it all the time.
That said, sometimes you might want to avoid simply calling the function directly, especially with callbacks. The way you're calling the function referenced by callback implies that unless that function was explicitly bound to some context (using bind), you're losing its context.
Instead, callback.apply(this, [val]); might be the preferred way of going about your business.
Another thing is that since you're passing a callback function to be called later on, you might want to seriously look into using async as much as possible. I don't have an awful lot of experience with node.js, but by its very nature you'll find yourself writing callbacks, handlers and just using general async trickery all over the place.
Having a package that makes that job a lot easier is always handy...
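A tiny illustration of that context point (the names here are invented):
// Illustrative only: an unbound callback loses its `this`.
const logger = {
  prefix: '[auth]',
  log(val) {
    console.log(this.prefix, val);   // relies on `this` being `logger`
  }
};

function checkAuthorization(callback) {
  const val = true;   // stand-in for the Redis lookup result
  callback(val);      // called "bare": `this` is no longer `logger`
}

checkAuthorization(logger.log);              // prints "undefined true" (or throws in strict mode)
checkAuthorization(logger.log.bind(logger)); // prints "[auth] true"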
There isn't anything fundamentally un-node-ish in this code; most libraries you will see are written exactly like this.
However, as you work on an application, it tends to grow rather quickly, and you eventually run into what is called "callback hell", where the nesting gets so deep that you can have dozens of nested anonymous function calls.
This quickly becomes unmaintainable, which is where control-flow libraries come in. There are several of them, the most popular ones being async and q.
async can be dropped readily into any codebase to manage callback hell; q requires architecting your code around Promises, but results in a generally nicer API.
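A rough sketch of the difference, assuming the async package and made-up step functions (getUser, getOrders, getInvoice, and id); error handling is omitted from the nested version for brevity:
// Nested callbacks...
getUser(id, function (err, user) {
  getOrders(user, function (err, orders) {
    getInvoice(orders[0], function (err, invoice) {
      console.log(invoice);
    });
  });
});

// ...versus the same flow flattened with async.waterfall:
const async = require('async');

async.waterfall([
  (cb) => getUser(id, cb),
  (user, cb) => getOrders(user, cb),
  (orders, cb) => getInvoice(orders[0], cb)
], function (err, invoice) {
  if (err) return console.error(err);
  console.log(invoice);
});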
