When feature-rich is bad

Feature-rich or bloat?

Every now and then I wonder if one of the security problems facing the software industry is the huge number of features enabled by default in large frameworks. These frameworks give developers powerful tools, classes, and libraries, and let them build applications faster and better.

There are, however, two caveats related to security here:

  • Most frameworks enable too much functionality by default.
  • The developer rarely knows about this extra functionality.

My most common target for this criticism is PHP, but that will be a different post. Today I will give an example of how this can affect C# applications if you are not careful.

The problem

A web application needs to pull news from RSS feeds on several hosts and display them in a mash-up. The application needs to work in older browsers, where a cross-domain request to fetch the feeds does not work.

One possible solution is to create a proxy that fetches the feeds from the web server and serves them to the web application. As the origin will be the same as that of the JavaScript in the application, there are no cross-domain requests at all.

Underneath would be a System.Net.WebClient that performs DownloadData(uri), based on the address sent as a parameter to the proxy service.

Everything looks quite alright (and secure) at first glance: only absolute addresses are allowed, and we check the host so that only RSS feeds from the same domain can be accessed.

This proxy service will download and then respond with the contents when the following is requested:

http://news.organization.com/feeds/latestnews.rss
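Put together, the vulnerable proxy described above might look roughly like this. The class, method, and domain names are my own illustration, not taken from any real application, and this is deliberately the flawed version:

```csharp
using System;
using System.Net;

// Sketch of the naive proxy: accepts an absolute URI,
// checks the host, and hands the rest to WebClient.
public class FeedProxy
{
    public byte[] Fetch(string address)
    {
        // Only absolute addresses are accepted...
        var uri = new Uri(address, UriKind.Absolute);

        // ...and the host must belong to our own domain.
        if (!uri.Host.EndsWith("organization.com"))
            throw new ArgumentException("Host not allowed.");

        // WebClient handles more schemes than just http/https,
        // which is exactly where the trouble starts.
        using (var client = new WebClient())
        {
            return client.DownloadData(uri);
        }
    }
}
```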

Digging deeper

However, all is not what it seems at first.

What is really easy to forget is that a uniform resource identifier (URI) does not have to point to an HTTP host, and System.Net.WebClient is kind enough (despite the name) to serve other schemes without any extra configuration…

How about accessing the same resource over SSL? No big deal:

https://news.organization.com/feeds/latestnews.rss

But wait… what other schemes could possibly work here?

How about something a bit closer to home?
file:///C:/inetpub/wwwroot/secret.txt

Well… yes. System.Net.WebClient will happily download that file for you, if given the opportunity. But there was one extra layer of security:

It turns out that this URI does not have a host, so it never makes it past the hostname check and into the WebClient.
So where would the file scheme get its host attribute from?
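This is easy to verify with System.Uri directly: a local file URI parses with an empty Host, which is why the hostname check stops it.

```csharp
using System;

class UriHostDemo
{
    static void Main()
    {
        var local = new Uri("file:///C:/inetpub/wwwroot/secret.txt");

        // A local file URI has no host component at all,
        // so any host-based check will reject it.
        Console.WriteLine(local.Host.Length == 0); // True
    }
}
```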

How about using the external name of the web server?
file://www.organization.com/C:/inetpub/wwwroot/secret.txt

This will make the server access itself as a network share, and it makes it past the checks on the hostname. But it still does not work; there is no network share with that name…

How about using the administrative share that is enabled by default?
file://www.organization.com/C$/inetpub/wwwroot/secret.txt

It turns out that this actually works, provided that the server is running with administrative privileges. The server should of course NEVER be configured that way, but as software developers these things are often out of our hands, so you cannot assume anything…

What more can we do while we are poking around? Access an intranet?

http://www.intranet.organization.com/

Read some files off a file share?

file://fileserver.intranet.organization.com/files/phonebook.xls

Yes… provided you know the path, and the file is accessible to the user, and the hostname is valid, and if… and if…

There are other schemes as well, such as ftp://.

I will leave it as an exercise for the reader to experiment with these schemes. Most of these issues can be fixed with proper network and security configuration.

What’s your point?

When researching this post, I came across some examples praising this exact functionality:

“That’s just too simple, but then, that’s the beauty of Microsoft’s new C# and dotNET framework. Congratulations to the Microsoft team.”

I could not disagree more. My point is:

Maybe System.Net.WebClient should not allow these more “unusual” URIs by default?
I think that they are more likely to be abused than used.


It would be better, from a security perspective, to enable this kind of functionality explicitly, only when it is needed. That is not to say that you shouldn’t always sanitize all input; you should do both.
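Such an opt-in design could look something like this. The SafeWebClient class below is purely my own sketch of the idea, not anything that exists in the framework:

```csharp
using System;
using System.Collections.Generic;
using System.Net;

// Hypothetical wrapper: only http/https work out of the box;
// every other scheme must be enabled explicitly by the developer.
public class SafeWebClient
{
    private readonly HashSet<string> allowedSchemes =
        new HashSet<string> { Uri.UriSchemeHttp, Uri.UriSchemeHttps };

    public void AllowScheme(string scheme)
    {
        allowedSchemes.Add(scheme);
    }

    public byte[] DownloadData(Uri uri)
    {
        if (!allowedSchemes.Contains(uri.Scheme))
            throw new NotSupportedException(
                "Scheme '" + uri.Scheme + "' must be enabled explicitly.");

        using (var client = new WebClient())
        {
            return client.DownloadData(uri);
        }
    }
}
```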


The solution

In our particular example, the URIs are probably rather static, and a simple whitelist could be used.
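A minimal sketch of such a whitelist follows; the feed addresses are made up for illustration:

```csharp
using System;
using System.Net;

public class WhitelistedFeedProxy
{
    // Only these exact, known-good feed URIs can ever be fetched.
    private static readonly string[] AllowedFeeds =
    {
        "http://news.organization.com/feeds/latestnews.rss",
        "http://news.organization.com/feeds/pressreleases.rss"
    };

    public byte[] Fetch(string address)
    {
        // Exact-match comparison: no schemes, hosts, or paths
        // other than the listed ones can slip through.
        if (Array.IndexOf(AllowedFeeds, address) < 0)
            throw new ArgumentException("Feed not on the whitelist.");

        using (var client = new WebClient())
        {
            return client.DownloadData(new Uri(address));
        }
    }
}
```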

In other scenarios, you could verify the scheme of the URI, but that does not prevent access to internal web servers.
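Checking the scheme is straightforward, as the sketch below shows (the helper name is my own), but note the limitation: an http URI pointing at an internal server would still pass.

```csharp
using System;

public static class SchemeCheck
{
    public static Uri RequireHttp(string address)
    {
        var uri = new Uri(address, UriKind.Absolute);

        // Rejects file://, ftp:// and friends,
        // but http://intranet.organization.com/ would still get through.
        if (uri.Scheme != Uri.UriSchemeHttp && uri.Scheme != Uri.UriSchemeHttps)
            throw new ArgumentException("Only http/https is allowed.");

        return uri;
    }
}
```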

Conclusion

With great power comes great responsibility

The lesson is to be aware of the extra functionality that is included in the frameworks you are using.

Much of the responsibility lies with the creators of frameworks: don’t enable every exotic feature by default, and be sure to document any extra functionality. If anything is likely to be abused, give proper warnings in the documentation. No one ever uses frameworks with poor documentation, right? 😉

“Hey, let’s be careful out there”