Authenticating legacy apps with a reverse proxy

This blog was written by an independent guest blogger.

When we think of “authentication” for our applications, most of us think of user registration, a login form, and resetting passwords. Our concerns begin and end there. But as we dive deeper and our security and compliance requirements change over time, we have to consider new password hashing algorithms, blocking bots, multi-factor authentication, and external identity providers. What started as a clear, concise set of requirements becomes an ever-growing list.

For new applications, we can add an identity provider like Azure Active Directory or Okta, embed any number of framework plugins, and count on those systems to handle all the complexity and change that we’d normally have to consider within our system. It’s a quick and easy exercise, and it centralizes policy across our entire ecosystem.

Unfortunately, most of our apps don’t fit this nice, clean, predictable world. We have years or even decades of mission-critical applications sitting in our infrastructure where the source code is lost to time, the team has moved on to other projects, and the overall system is working “so please don’t touch it!” That makes a “simple checkbox task” much more complicated. We need to rethink our approach to accessing these systems.

Enter the reverse proxy

[Diagram: Before ngrok]

In a general client-server architecture, the client makes requests directly to the server. As the number of clients grows, they can overwhelm the server and prevent any requests from being fulfilled. With a reverse proxy, we put a system in front of the server to act as a gateway. This gateway applies its own rules to incoming requests, confirms they meet its requirements, and forwards the acceptable ones on to the server.
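For instance, with an NGINX-based proxy (one of the options we’ll come back to below), a bare pass-through gateway is little more than a proxy_pass directive. The hostname and backend address in this sketch are placeholders:

    server {
        listen 80;
        server_name legacy.example.com;         # placeholder hostname

        location / {
            # Every client request lands here first and is then
            # forwarded to the real application server behind the proxy.
            proxy_pass http://app-server:8080;  # placeholder backend address
        }
    }

Everything that follows builds on this shape: the rules live at the gateway, and the server behind it stays untouched.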

To think of it another way, consider your favorite restaurant. Do you sit down with the menu, choose a dish, and shout it to the kitchen? No, that would create chaos and confusion. Instead, a waiter takes your order and divides it into pieces, telling the bartender your drink order and the kitchen your dinner selection. As each dish is ready, the kitchen assembles the pieces, and the waiter brings you the result. You don’t know or care whether one or ten cooks are preparing your meal. The wait staff provides an abstraction layer between you and the kitchen without changing how either the diner or the kitchen operates.

When we think back to our legacy application, the reverse proxy performs the same service. It acts as an abstraction layer for our security requirements and allows the implementation to change independently of the system we’re protecting. In fact, as our requirements change and expand, we can usually focus entirely on the reverse proxy and ignore most of the underlying legacy system.

[Diagram: After ngrok]

HTTP Basic Authentication with a reverse proxy

Now that we can gate access to our server, enforcing authentication is straightforward. For the following examples, we’ll assume the application we’re protecting lives at 192.168.1.1:8080.

At the simplest level, we can start with HTTP Basic Authentication. With an Apache or NGINX-based proxy, you would use a command like this to create a new user named “katelibby”:

> htpasswd -bc /etc/apache2/.htpasswd katelibby reverseproxyftw
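(The -c flag creates the password file on first use, and -b reads the password from the command line instead of prompting for it.) That file then gets wired into the proxy itself. As a minimal sketch for an NGINX-based proxy, building on the pass-through example above (the hostname is a placeholder, and NGINX reads the same htpasswd file format Apache uses), it only takes two extra directives:

    server {
        listen 80;
        server_name legacy.example.com;                    # placeholder hostname

        location / {
            auth_basic           "Restricted";             # challenge clients for credentials
            auth_basic_user_file /etc/apache2/.htpasswd;   # the file created by htpasswd above
            proxy_pass           http://192.168.1.1:8080;  # the legacy application
        }
    }

An Apache-based proxy follows the same pattern with AuthType Basic, AuthUserFile, Require valid-user, and ProxyPass.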

With that file loaded into the proxy configuration, you’re up and running in seconds. At ngrok, we can accomplish the same on the command line with this option:

> ngrok http 192.168.1.1:8080 --basic-auth="katelibby:reverseproxyftw"

But be careful: unlike the htpasswd approach, the ngrok command creates an ephemeral user that ceases to exist when the command is interrupted. On the positive side, you don’t need any additional services or components.

Unfortunately, HTTP Basic Auth isn’t usually the best option. On the surface, it’s easy to set up and maintain, but it turns into an administrative headache over time. First, an admin must create each user, which means the admin normally knows the passwords and - even worse - end users can’t reset or recover their accounts on their own. In general, HTTP Basic Auth is really only useful for small, simple projects with minimal or non-existent security and compliance requirements.

OAuth 2.0 and OpenID Connect with a reverse proxy

When we go deeper into the authentication rabbit hole, we quickly get to OAuth 2.0. OAuth addresses many of those self-service shortcomings by delegating everything authentication-related (and authorization-related!) to an identity provider.

Luckily, because OAuth is an open protocol, there are implementations for every system out there. The Apache approach to auth gives you a toolbox of options to build your own, while my former colleague at Okta, Aaron Parecki, covers his approach in “Use nginx to Add Authentication to Any Application.”
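One common shape of that nginx approach relies on the auth_request module (assuming NGINX was built with it): the proxy asks a small companion OAuth/OIDC service to approve every request before forwarding it. The sketch below is only an outline; the companion service’s address and its /validate path are assumptions that vary by tool:

    server {
        listen 443 ssl;
        server_name legacy.example.com;                  # placeholder hostname
        ssl_certificate     /etc/nginx/certs/proxy.crt;  # placeholder certificate
        ssl_certificate_key /etc/nginx/certs/proxy.key;

        location / {
            auth_request /validate;                      # check each request with the helper first
            proxy_pass   http://192.168.1.1:8080;        # only reached if /validate returns 2xx
        }

        location = /validate {
            internal;                                    # never reachable directly by clients
            proxy_pass http://127.0.0.1:9090/validate;   # hypothetical OAuth/OIDC helper service
            proxy_pass_request_body off;                 # the helper only needs headers and cookies
            proxy_set_header Content-Length "";
        }
    }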

At ngrok, we take a different approach, with built-in support for some of the major OAuth providers:

> ngrok http 192.168.1.1:8080 --oauth=google

Or if we preferred to use OpenID Connect specifically, that command would change to:

> ngrok http 192.168.1.1:8080 --oidc=https://login.identityprovider.com --oidc-client-id=clientId --oidc-client-secret=clientSecret
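In practice, we’d usually also restrict who can complete that login rather than accepting any account from the provider. Depending on the agent version (check ngrok http --help for the options yours supports), the OAuth login can be narrowed to a specific domain; the domain below is a placeholder:

> ngrok http 192.168.1.1:8080 --oauth=google --oauth-allow-domain=example.com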

Regardless of which approach we take, our underlying system is now protected with OAuth 2.0 or OpenID Connect without any changes to the system itself.

Further, since we’ve outsourced authentication to a separate identity provider, our underlying system now falls under that provider’s security policies. If our identity provider requires complex passwords or multi-factor authentication, or restricts the IP ranges allowed to log in, our legacy system doesn’t know and doesn’t care. We get all the benefits of those policies without having to touch the underlying system.

Is a reverse proxy the Holy Grail?

Regardless of the benefits, there are a few tradeoffs involved in choosing a reverse proxy. A reverse proxy can often view and inspect network traffic as it flows between clients and servers. The positive take on this inspection is that the proxy can detect and block malicious activity or improve speed via compression and traffic shaping. The negative take is that the proxy can log that traffic, potentially providing a new target for attackers. In practice, you can mitigate the logging and introspection risks by implementing end-to-end encryption via TLS.
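As a rough sketch of that mitigation with an NGINX-based proxy (the hostname, certificate paths, and the backend’s TLS port are all assumptions, including that the legacy app can terminate TLS itself), the proxy can re-encrypt traffic on its way to the backend so neither network leg travels in plaintext:

    server {
        listen 443 ssl;
        server_name legacy.example.com;                               # placeholder hostname
        ssl_certificate     /etc/nginx/certs/legacy.example.com.crt;  # placeholder certificate
        ssl_certificate_key /etc/nginx/certs/legacy.example.com.key;

        location / {
            proxy_pass https://192.168.1.1:8443;                      # assumes the backend speaks TLS
            proxy_ssl_verify              on;                         # verify the backend's certificate
            proxy_ssl_trusted_certificate /etc/nginx/certs/internal-ca.crt;  # placeholder internal CA
        }
    }

Note that the proxy still decrypts requests in order to apply its rules, so the proxy host itself remains part of your threat model.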

Architecturally, as the gateway to our application, the proxy becomes a new single point of failure. Therefore, we have to plan for it to be stable, reliable, fail gracefully, and recover quickly. This may require teams to learn and implement new tooling and monitoring for observability, but any reasonable reverse proxy setup will treat those capabilities as core.

Fundamentally, a reverse proxy gives you control and oversight over the legacy systems living on your network. With just a little effort, you can bring them under the umbrella of your existing security practices and policies and even expand and adapt as those requirements change. Done well, a reverse proxy will also let you address non-security aspects like traffic shaping, payload/request validation, circuit breaking, and even replacing the underlying system completely and transparently to end users.

A reverse proxy does not solve all of our problems, but a single point of access gives us power.
