Salesforce data loss prevention - can I embed JS on standard Salesforce pages?

I'm looking into possible ways to control and monitor data leaving our Salesforce org. Currently, solutions appear to fall into two broad categories:
Lock down features for users using profiles, e.g. prevent certain kinds of users from having reporting or exporting rights.
Have third-party monitoring software installed on work machines which monitors and controls interactions with salesforce.com.
Neither of these suits our requirements. Our problem is with users who need to be able to run and extract reports, but may be doing so from some internet cafe. We also can't restrict them to work machines, as a lot of them are travelling salespeople.
Furthermore, Salesforce have said they don't provide any kind of report on what reports have been run or what data has been exported.
I'm investigating a third possibility, which is to bolt some sort of monitoring JS onto salesforce.com itself. If possible, I'd like to embed JS on the Salesforce Reports tab (and any other page where data can be exported) and intercept clicks on the "Run Report" or "Export" buttons. I'd call a logging web service with the user's name, the report parameters, the time, etc.
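Roughly, the interception I have in mind would look something like the following. This is only a sketch: the button lookup, the user value, and the logging endpoint are all placeholders for things I'd still have to work out.

var runButton = document.getElementsByName('run')[0]; // placeholder selector
if (runButton) {
  runButton.addEventListener('click', function () {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'https://audit.example.com/log', true); // hypothetical logging service
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify({
      user: 'CURRENT_USER',       // placeholder: would be read from the page
      page: window.location.href, // which report page was open
      action: 'Run Report',
      time: new Date().toISOString()
    }));
  });
}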
Does anyone know if it's possible to embed custom JS on salesforce pages? Or any neater solution to the above?
Thanks for your help
Ray

Salesforce is very protective of their code base, to the degree that even custom Apex code runs on a completely different domain, so that the browser's cross-domain restrictions prevent us from tweaking their pages :) So unless a man-in-the-middle SSL attack is used, there is no way to inject something into their code.
Maybe a Greasemonkey script? But users could remove it or just use another browser.
I do not think you have an ideal solution here other than security, either profile (object level) or sharing (row level). Think of it this way: someone keen on stealing data could just grab the HTML of the detail pages of the rows participating in a report, extract the raw data from that HTML, and run the reports externally. Maybe force travelling salespeople to use RDP to office-located machines?

Another option would be to turn a subset of your reports into Visualforce pages (write out the SOQL and Apex needed to gather the data, then write VF markup to display it) and then log and/or restrict who can access those pages and when (including checking the source IP).
There's clearly some serious effort involved in moving more complex reports over to this format, but it's the only way I can think of to meet your requirements.
This would also allow you to not include any sort of export options, although they could of course save the raw HTML of the page if they really wanted to.

To embed JavaScript in a standard SFDC page, go to "Your Name" => "Setup" => "Customize" => "Home" => "Home Page Components" => click the Edit link next to "Messages & Alerts". On the "Edit Messages and Alerts" page there is a text area where you can paste JavaScript code that will be executed on almost every Salesforce page.
Here are a few notes when doing this:
Do not put empty lines in your code, because the system will insert a p HTML tag into it.
Use absolute references to your Salesforce pages, because managed packages have a different URL structure.
I'm not sure how much longer Salesforce will allow this, but it currently works.
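For illustration, here is a sketch of the kind of snippet you might paste into that text area (kept free of empty lines, per the note above). The report-page check and the logging URL are assumptions you would need to adapt to your own org:

(function () {
  // only act on report pages; '/00O' is the report ID prefix (adjust for your org)
  if (window.location.pathname.indexOf('/00O') !== 0) { return; }
  document.addEventListener('click', function (e) {
    var label = e.target.value || e.target.innerText || '';
    if (label === 'Run Report' || label === 'Export Details' || label === 'Export') {
      var img = new Image();
      // fire-and-forget GET to a hypothetical logging service
      img.src = 'https://logging.example.com/log?page=' + encodeURIComponent(window.location.href) + '&action=' + encodeURIComponent(label) + '&t=' + new Date().getTime();
    }
  }, true);
})();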
For more info see http://sfdc.arrowpointe.com/2011/11/18/email-autocomplete-using-jquery/
Slightly different way of doing it https://salesforce.stackexchange.com/questions/482/how-can-i-execute-javascript-on-a-sfdc-standard-detail-page

Related

How to achieve security and hide code from unauthorized users on a web page?

I'm creating a statistics web page which displays sensitive information.
The web page has a sort of table which holds massive amounts of data, editable and stored in the server's database. But it needs to be hidden until the user has proper authentication (like logging in), and that includes the table itself and its code. I found that most of the questions on Stack Overflow say this is basically impossible, but lots of well-known websites seem to hide these things well, so I guess there are some solutions to the problem.
To start, I built a full stack on a React - Express - Node - MariaDB toolchain.
The React client is responsible for rendering the contents of the web page, including the editable tables, and for submitting edited content.
Node with Express is responsible for retrieving data from the DB and updating the DB (it provides data for the client side to manipulate, and that's all).
The problem comes when I consider the security of the client-side code. I want to hide all content of the page (not just the data from the server, but also its logic and features).
To achieve my goals I have considered several things, but I doubt whether they are right and would work well if I built them.
Using server-side rendering -- can't use this, for performance reasons and a lack of available resources.
Server-side rendering can hide logic from the user because the server emits only HTML; all actions are submitted to the server, which handles them and returns the result.
So I could provide only the login page at first, and if login is successful, send the rest of the HTML and its logic from the server.
The problem is that the content of my web page is massive and the user will interact with it very often, and with virtualization applied to my table (for performance reasons), its data and rendering logic have to be handled by the web browser.
Combining SSR and Client-Side Rendering
I'm not sure about this one; I doubt whether it is possible.
Use SSR to hide the content of the site from unauthorized users, and once authorized, have the web browser render the full content on demand. (Code and logic should be hidden before authorization; an unauthorized user can only see the login page.)
Is it possible to do it?
Get code on demand.
This is also my own idea, and it is what I am really looking for, but I strongly doubt whether it is possible.
The workflow would be like this (a sketch follows below):
If a user is not logged in: the user can only see the login page and its code.
If the user is logged in: the user can see the features of the page, like management, statistics, etc.
If the user reaches a specific feature: the rendering logic and the HTTP request interface are downloaded from the server (or less performance-hindering logic, or whatever else) and render what the user wants to see and do.
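For what it's worth, a rough sketch of that code-on-demand step in the browser, using dynamic import() (the module path, function names, and element IDs are made up, and the server would still have to check the session before serving the chunk, otherwise the chunk URL alone would expose it):

// after a successful login, fetch a feature's code only when it is needed
async function openStatistics() {
  // this bundle is downloaded only at this point, never before login
  const { renderStatistics } = await import('./features/statistics.js');
  renderStatistics(document.getElementById('main'));
}
document.getElementById('stats-button').addEventListener('click', openStatistics);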
It is okay if the above ideas don't work out. Can you provide some outline for implementing this kind of web page? I'm quite new to web programming, so I cannot find the proper approach. I want to know how I can achieve this, and with what kinds of solutions, libraries, and structure.
What lib or package should I use for this?
How can I implement this?
Or can you describe to me how modern websites achieve this? (I think the SAP system quite resembles what I want to achieve.)
Foreword
Security is a complex topic in which it is not possible to reach zero threat. I'll try to craft an answer that could fulfil what you are looking for.
Back end: Token, credentials, authentication
So, you are currently using Express for your back end, hence the need to protect access on that side. Many solutions exist; I favor token authentication, but you can do something with username/password to let the users access the back end.
From what you are describing, you would use some sort of API (REST, GraphQL, etc.) to connect to the back end and make your queries (fetch, cross-fetch, apollo-link, etc.), adding the token to the call to the back end, usually in the headers.
If a user doesn't have the proper token, they get no data. Many sites use that method to gate the consumption of data by users (e.g. Twitter, Instagram). This should cover the security of the data for your back end, and no code is exposed.
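As a minimal sketch of that idea in Express (assuming the jsonwebtoken package; the /api/stats route and the secret handling are illustrative only):

const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();
const SECRET = process.env.JWT_SECRET; // the signing secret stays on the server

// middleware: reject any API call that lacks a valid token in the headers
function requireAuth(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.replace(/^Bearer /, '');
  try {
    req.user = jwt.verify(token, SECRET); // throws if invalid or expired
    next();
  } catch (err) {
    res.status(401).json({ error: 'invalid or missing token' });
  }
}

// no valid token, no data -- the client never receives these rows
app.get('/api/stats', requireAuth, (req, res) => {
  res.json({ rows: [] }); // query MariaDB here in a real application
});

app.listen(3000);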
Front end: webpack and application code splitting
Now the tricky part: you want the client side not to have access to all of the front end at once, but only in several parts. This has two caveats:
It will be a bit slower than in normal use
Once the client has logged in once, they will have access to the application
The only workaround I see in this situation is to use only server-side rendering, if you want to limit to the bare minimum the amount of data the client has of your front end. Granted, it is slow, but if you want maximum protection that is the only solution.
Yet, if you still want to keep some interactions and have a faster front end, while keeping a bit of security, you could use some code splitting with webpack. I am not familiar with C so I can't say, but the multiple-page application setup of webpack, as I was mentioning in the comment, should give you a good start for building something more secure.
First, you would have, for example, two HTML files for entering the front end: one with the login and one with the application. The login page contains only the JavaScript modules needed for entering the application, and shouldn't load the other JavaScript modules.
All in all, entry points are the way you enter the application; this is a very broad topic that I can't cover in this answer, but I would recommend you follow webpack's tutorial and find out how you can work this out.
I recommend the part on code splitting, but the whole tutorial is worth a look.
Second, you will have to tweak the optimization settings. This is the part that usually tries to reduce the size of the application by merging methods that are used by different parts or that are redundant: you don't want this.
In your case, you don't want unauthenticated users to have access. So you would probably have to change things there (again, too broad a topic to cover in a single answer, since you would have to decide what you keep for optimization and what you remove for security), but here is the link to the optimization documentation, and a heads-up: you will have to configure the SplitChunksPlugin not to do this optimization.
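A minimal sketch of such a two-entry webpack configuration (entry names, paths, and the split policy are assumptions, not a drop-in config):

// webpack.config.js
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  entry: {
    login: './src/login/index.js', // the only bundle anonymous users receive
    app: './src/app/index.js',     // the full application, loaded after authentication
  },
  output: {
    filename: '[name].[contenthash].js',
    path: path.resolve(__dirname, 'dist'),
  },
  plugins: [
    // one HTML file per entry point, each pulling in only its own chunk
    new HtmlWebpackPlugin({ filename: 'login.html', chunks: ['login'] }),
    new HtmlWebpackPlugin({ filename: 'app.html', chunks: ['app'] }),
  ],
  optimization: {
    // restrict splitting to async chunks so no shared "commons" chunk
    // leaks application code into the login page
    splitChunks: { chunks: 'async' },
  },
};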
I hope this helps. There are many solutions at hand and this is not a comprehensive guide, but it should give you enough material to get to what you need.

Sharing dynamic client- and server-side content between JSPs

I've done quite a lot of searching about the capabilities of JSPs and have been unable to find a concise answer to my issue.
I am currently writing a web application which uses a single JSP (with an imported CSS file) to build a site with multiple functionalities. Specifically, the application has the ability to read and write data from/to an external server, as well as update user content and info.
For the sake of aesthetics and clarity, it makes the most sense to designate different areas of the site to each of these tasks, one at a time. Rather than attempt to use page divisions and conditional statements to control their visibility and execution, I want to essentially "cut up" the logic behind this dynamic content and spread it across multiple JSPs to allow for more organized editing, testing, and modification of the code by both a web developer and programmer, without either necessarily having to collaborate.
Assuming that all of the JSPs involved have the appropriate imports, are there any essential measures I need to take to allow seamless sharing of content between these JSPs? For example, if a user writes to the server and stores their data as some entity (from a page with tools from one JSP), will the functions used to query the database work if I simply copy that code into another JSP and have the user navigate to that page in order to access the persisted data?
Thank you for your feedback!
Learn the Model-View-Controller design pattern. It will greatly help you solve many of your project's problems, and your architecture will be more maintainable and scalable. You can see here and here for more information about implementing this pattern with JSP.

Security on Web page that will allow user to add javascript dynamically

I have implemented a requirement in my website where I allow my end users to configure a link to execute any JavaScript they may require. Since they can type in any JavaScript they need, they also have the ability to open different web pages, create new pages via JavaScript, edit elements on the page via JavaScript, and so on.
I have some security concerns over this functionality and would like to get some opinions. Is it possible that a malicious or unethical script could be added to the page that could bring about legal or credibility issues? If so, is it possible to put in code that would restrict the kind of JavaScript my users may add?
There's a thing called ADsafe, which was developed for banner ads; it is a strict subset of JavaScript that is meant to prevent malicious code. I don't think you'd be able to do things like
open different web pages, create new pages via javascript, edit elements in the page via javascript and so on
though. I think you should rethink your needs and try to determine whether you can offer users a choice of pre-determined code that you write, perhaps customizable within certain bounds.
Then again, if you're absolutely sure that the JavaScript is only going to run for the user who entered it, there shouldn't be anything they can do that will screw things up for anyone else. A determined user could inject their JavaScript through other means anyway, like a rewriting proxy, a browser extension, or simply the JavaScript console.

What precautions should I take before I let client add javascript to a webpage?

Question: What precautions should I take when I let clients add custom JS scripts to their pages?
If you want more details:
I am working on a custom CMS-like project for a company. The CMS has a number of "groups", each owned by a subscriber, where they do their own thing.
The new requirement is that some groups want to add Google Analytics to see how they are doing. So I naturally added a column to the table and made code adjustments so that if there is data in that column, I use the following line in the master page to write the script out:
ScriptManager.RegisterClientScriptBlock(Page, typeof(Page), "CustomJs", CustomJs, true);
It works just fine. Only, it got me thinking...
It's really easy for someone with good JavaScript knowledge to access cookies and so on. Sure, each group is moderated and only the super admin can add this JavaScript; sure, they wouldn't be silly enough to hack their own group; and each group has its own code, so it's not possible to hack other groups. BUT STILL.
I am not really comfortable letting users add their own JavaScript code.
I could monitor each group myself, but the groups are growing really quickly, and I will hit a point where I am no longer able to do that.
So, to sum it up: what precautions should I take to avoid any mishaps?
PS: I did try to Google this; no convincing answers anywhere.
Instead of allowing the users to add their own JavaScript, and given that the only requirement here is for Google Analytics, why not just let them put their Analytics ID into the CMS and, if it's present, output the relevant Google Analytics code?
This way you fulfil the user's requirement and also avoid the need to protect against malicious scripting.
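For example, the stored value would only ever be interpolated into the stock (ga.js-era) Analytics snippet, something like the following, where "UA-XXXXXX-Y" comes from the new column (validating it server-side against a pattern like ^UA-\d+-\d+$ before emitting it would be a sensible extra precaution):

var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXX-Y']); // the only user-supplied value
_gaq.push(['_trackPageview']);
(function () {
  // standard asynchronous loader from Google's own embed code
  var ga = document.createElement('script');
  ga.type = 'text/javascript';
  ga.async = true;
  ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
  var s = document.getElementsByTagName('script')[0];
  s.parentNode.insertBefore(ga, s);
})();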
Letting users use Javascript is in general, a very bad idea. Don't do it unless you have to.
I once had a problem where I needed to let clients use JavaScript, but the clients weren't necessarily trusted, so I modified CoffeeScript so that only a small subset of it was compilable to JavaScript, and it worked pretty well. This may be waaaay too overkill for you.
You should not let your users access cookies; that's always a pain. Also, no localStorage or Web SQL if you're one of the HTML5 people, and no document.write(), because that's another form of eval, as JSLint tells you.
And the problem with letting people have JavaScript is that even if you believe you have trusted users, someone may get hold of a password, and you don't want that person to get access to all the other accounts in the group.
Automatically recognizing whether some JavaScript code is malicious, or sandboxing it, is close to impossible. If you don't want to allow hacking of your site, you are left with only a few options:
Don't allow users to add JavaScript at all.
Only allow predefined JavaScript code, e.g. for Google Analytics.
Have all custom JavaScript inspected by a human before it is allowed to appear on the site. Never trust scripts loaded from third-party sites; these can change from one day to the next and turn malicious.
If you have no other choice, you may consider separating the path/domain of user JavaScript (and cookies).
For example, you keep each user's pages on their own subdomain:
user1.server.com
So, if you set session cookies for user1.server.com only, that renders them unobtainable by user scripts from other domains (e.g. user2.server.com).
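In Node/Express terms, for illustration (the names are made up; the same idea applies to any server stack):

// when this response is served from user1.server.com, omitting the
// "domain" option keeps the cookie host-only, so pages and scripts on
// user2.server.com never see it
res.cookie('sessionid', sessionToken, {
  httpOnly: true, // additionally hides the cookie from any in-page JavaScript
  secure: true,
});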
Another option may be executing all user JavaScript in a server-side JS engine (thus controlling all of its I/O and limiting access to browser resources).
There is no simple and easy solution anyway, so better to consider the options from the other answers (e.g. a predefined script API, or human inspection).

Does googlebot recognize an HTML <title> tag altered by JavaScript?

I have a search engine on my website and it works via AJAX. I want to have a specific <title> for each search attempt. To achieve that, I have to alter the <title> every time I receive a response from the AJAX call.
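Roughly like this (jQuery assumed; the endpoint and the rendering helper are my own placeholders):

$.get('/search', { q: query }, function (results) {
  renderResults(results);                        // my own helper: update the page with the AJAX data
  document.title = query + ' - Search - MySite'; // alter the <title> afterwards
});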
Do you have any idea if googlebot will see this altered and use it to index my webpage?
Thanks for any help!
Do you have any idea if googlebot will see this altered and use it to index my webpage?
Most likely not.
You should change the title on server side.
Googlebot does something similar to opening the page URL in Notepad. It will see the JavaScript code that tries to change the title as plain text, but it will not see the result of the script's execution, of course.
EDIT:
AJAX-enabled web pages are crawled using the same principle, unless they follow the techniques for AJAX-enabled web sites suggested by Google:
AJAX crawling: Guide for webmasters and developers
Well, Google has added many features to its search engine over the past years, and probably it will be able to see the changes. But how do you imagine a client should reach a page whose address doesn't change but whose content does after a few clicks? You must combine AJAX with normal separate pages; this will also add compatibility for clients that have JavaScript disabled. E.g. have all pages redirect to the AJAX-based one if JavaScript is enabled and the user-agent string doesn't match *bot*.
Simply put, Google will not index any dynamic content on your page.
As Slava said, Google has added many features to its search engine over the past years, and probably it will be able to see the changes. But even if Google does eventually start indexing dynamically changed content, from a search engine optimization standpoint that content will still not be indexed as quickly as content served from the server.
It's important to know what you're getting and what you're losing. Yes, you may be adding functionality to your page easily and enhancing the user experience, but if you don't get the data indexed, you lose all that juicy keywordy goodness. :)
