More features for your corporate network: a hacking-based approach (I)
Recently we came across a somewhat curious request, and after analyzing the problem we offered a rather unusual solution. For the benefit of those who misunderstand the word "hacking": everything we did was within the law. After all, it amounts to nothing more than finishing the job that the manufacturer of a commercial router should have done.
Let us explain. A small company inherited its original infrastructure and, caught up in a process of corporate growth, found that the choices made at the start now made migration and evolution difficult. Those choices generally involved low- and mid-range networking devices, such as routers and switches, which covered most of the company's needs, until they no longer did. We are not talking about anything as drastic as denial of access or the impossibility of deploying prototypes and services; had that been the case, they would have bought new infrastructure without a second thought. But the old equipment did hinder collaborative work, quality-of-service policies and priority handling on the internal corporate network.
Considering the costs of renewal and amortization, the SME had been reusing the same infrastructure across a multitude of devices and sites, so solving the problem in one of its workgroups would let them reuse the solution everywhere else and cut costs. After analyzing the devices, we identified the main bottleneck for managing and automating internal processes: the central router.
Sometimes customers try to crack a boulder with a nutcracker, but we understand the historical and financial reasons; we have all been guilty of it at one time or another. Our objective, therefore, was clear: to provide a commercial managed router with an API, together with an associated management service, knowing that the manufacturer only offers a web interface for manual operation. The two alternative solutions (much more expensive in cases like this) would have been to migrate to open-source router firmware, for example OpenWRT, or to buy a high-end professional router with remote management.
A few years ago we did some work analyzing router authentication portals, so we dusted off all that material and refreshed our knowledge. Unfortunately for us and for this job, times change, and with them the authentication mechanisms, protocols and utilities. In fairness, this puts us on the right side of the change, because anything that increases security without compromising usability or functionality should be welcomed. In other words: the techniques used by the TP-Link Archer C7 router are new, and our previous work could not be reused, at least not directly.
We did a first sweep of manuals, mailing lists and even official communication with TP-Link, and went through projects we have worked with on occasion, such as OpenWRT, pfSense and their related communities. After this initial exploration we concluded that there is no substantial public information about the layer we needed to get past to satisfy our customer.
We got down to work, beginning with a general exploration of the authentication process, the interaction with the panels, and the information exposed by the router. We saw that our client's needs could be met: by combining software on an external system (an always-on computer acting as a DMZ server) with the information this router exposes, it is possible to build a quite acceptable solution.
Once we saw that the objective was feasible from a functional point of view, we started to explore the implementation details of the system, again in a general way, trying to understand and recognize the patterns, frameworks, libraries and techniques used across the router's web application.
The software has changed significantly with respect to the previous versions and models we had worked with: it has improved not only in quality but also in the complexity of its internal techniques and protocols (between the frontend manager and the router's internal service). Its developers have clearly applied themselves, leaning heavily on asynchronous communication and dynamic page construction. All of this makes our analysis harder, and it slows down the progressive modification of functionality and the develop-evaluate-debug cycles needed to reach our goal. In short, it complicates the reverse engineering and the injection of our code.
Since we did not intend to perform a full design and architectural reconstruction, we focused on reaching the goal relatively quickly, at the cost of a shallower analysis, worse adaptability to internal changes and lower maintainability in the face of new features. Among our first steps were building a schema of the communication protocol and the cryptographic protocols used, and classifying the functionality, roughly speaking, according to the resources it touches. As an example: three cryptographic keys are used, two of which rotate over time (AES and hash variants) and travel as blob streams over AJAX, encrypted in turn under the initial key (RSA). In addition, the authentication seed works as a session, which is invalidated in different regions of the application and after a time limit (which is reasonable). Interestingly, they have implemented an SSL-like scheme for the initial phase of the authentication gateway; on the other hand, they have left the management traffic on plain HTTP (susceptible to man-in-the-middle attacks on the same network), which we do not understand, since an intruder on the local network could escalate and gain control of the network.
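The key scheme just described can be sketched in code. To be clear: TP-Link's actual protocol details are not public, so everything below is our own modelling of the idea, not the vendor's implementation. We use textbook RSA over two Mersenne primes (no padding, purely illustrative) in place of the router's real RSA key, a SHA-256-based XOR keystream as a stdlib-only stand-in for AES, and an HMAC in place of the vendor's hash variant; the rotating keys are re-derived on both sides from the RSA-protected seed plus an epoch counter.

```python
import hashlib
import hmac
import os

# Textbook RSA over Mersenne primes: illustration only, no padding, NOT secure.
# It stands in for the router's initial RSA key from the authentication gateway.
P, Q = 2**127 - 1, 2**89 - 1
N, E = P * Q, 65537
D = pow(E, -1, (P - 1) * (Q - 1))

def rsa_encrypt(data: bytes) -> int:
    return pow(int.from_bytes(data, "big"), E, N)

def rsa_decrypt(cipher: int) -> bytes:
    return pow(cipher, D, N).to_bytes(16, "big")

def epoch_key(seed: bytes, epoch: int) -> bytes:
    # Rotating session key: both ends re-derive it from the shared seed
    # and the current epoch, so it changes over time without renegotiation.
    return hashlib.sha256(seed + epoch.to_bytes(8, "big")).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy XOR keystream standing in for AES, to keep the sketch stdlib-only.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def seal(seed: bytes, epoch: int, payload: bytes) -> tuple:
    # Produce the "blob stream": encrypted payload plus an integrity tag.
    k = epoch_key(seed, epoch)
    blob = xor_stream(k, payload)
    tag = hmac.new(k, blob, hashlib.sha256).digest()
    return blob, tag

def open_blob(seed: bytes, epoch: int, blob: bytes, tag: bytes) -> bytes:
    k = epoch_key(seed, epoch)
    if not hmac.compare_digest(tag, hmac.new(k, blob, hashlib.sha256).digest()):
        raise ValueError("integrity check failed")
    return xor_stream(k, blob)

# The client generates a 16-byte seed, protects it under the router's RSA
# key, and from then on both sides derive the rotating per-epoch keys.
seed = os.urandom(16)
wire = rsa_encrypt(seed)             # what would travel in the initial exchange
recovered = rsa_decrypt(wire)        # the router side recovers the same seed
blob, tag = seal(seed, 1, b'{"action":"login"}')
```

The point of the sketch is the envelope structure: one long-lived asymmetric key bootstraps a shared seed, and everything after that is symmetric material that rotates per epoch, which matches the behavior we observed.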
This design complicates quick tests and partial injection into the generated content, so we captured traces in order to regenerate the flow later. It was a somewhat tedious process, but it left us with a capture of every element used in a session, allowing us to modify the protocol and replicate its operation with our added features. After analyzing the traces and running as many inspections and executions as we needed, we moved on to a manual proof of concept, simulating our interactions through visual scripting. This sped up the construction of injection blocks for the generated frames and let us verify small tests directly against the authentication process and its different states, validating our ideas before attempting any kind of automation.
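The capture-and-replay idea can be reduced to a small harness. The structure below is a minimal sketch under our own assumptions: the endpoint names (`/cgi/login`, `/cgi/status`) and the fake transport are hypothetical placeholders, not the router's real URLs; in practice the `send` callable would wrap real HTTP requests against the device.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Frame:
    # One captured request/response pair from a management session.
    url: str
    body: dict
    response: dict

@dataclass
class Trace:
    frames: list = field(default_factory=list)

    def record(self, url: str, body: dict, response: dict) -> None:
        self.frames.append(Frame(url, dict(body), dict(response)))

    def replay(self, send: Callable[[str, dict], dict],
               mutate: Optional[Callable[[Frame], Frame]] = None) -> list:
        # Re-drive the captured session, optionally rewriting each frame
        # (e.g. to inject modified parameters) before it goes on the wire.
        results = []
        for frame in self.frames:
            f = mutate(frame) if mutate else frame
            results.append(send(f.url, f.body))
        return results

# A fake transport standing in for the router, for offline experiments.
def fake_router(url: str, body: dict) -> dict:
    return {"url": url, "echo": body}

trace = Trace()
trace.record("/cgi/login", {"user": "admin"}, {"ok": True})
trace.record("/cgi/status", {}, {"uptime": 123})

# Replay the session while rewriting only the login frame.
def patch(frame: Frame) -> Frame:
    if frame.url == "/cgi/login":
        return Frame(frame.url, {**frame.body, "extra": 1}, frame.response)
    return frame

out = trace.replay(fake_router, patch)
```

Keeping the mutation as a pluggable hook is what made per-frame experiments cheap: each injection idea becomes one small function applied to an otherwise faithful replay.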
Once the proofs of concept against the authentication process were working, we simulated the same behavior with the captured frames and began writing automatic code for the different regions of the flow: the cryptographic key exchange, the sending of credentials, the reception of base values and the propagation of secondary keys, among others. We ran proofs of concept with dynamic conditions that manipulate both the content and the flow of certain frames, successfully affecting the process. These conditions are states that vary and affect subsequent states, subject to temporal restrictions such as the validity of the cryptographic exchange tokens. The objective is an environment that is easy to modify if the protocol changes (a firmware update) or the functional requirements change (customer needs and solutions). Once this "simulation" had been successfully verified, we started working on the automation.
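The region-by-region structure lends itself to an explicit state machine, which is roughly how we organized the automatic code. The sketch below is our own modelling: the state names mirror the regions listed above but are not vendor terms, and the token time-to-live is an assumed parameter. Each region is one handler, so a protocol change after a firmware update means swapping one handler rather than rewriting the flow.

```python
import time

class TokenExpired(Exception):
    """Raised when the exchange token outlives its validity window."""

class AuthFlow:
    # Drives the authentication exchange as explicit states; names and
    # transitions are our own modelling of the flow, not vendor terms.
    def __init__(self, token_ttl: float, clock=time.monotonic):
        self.state = "key_exchange"
        self.token_ttl = token_ttl
        self.clock = clock
        self.token_born = None

    def step(self) -> str:
        handler = getattr(self, "on_" + self.state)
        self.state = handler()
        return self.state

    def _check_token(self) -> None:
        # Temporal restriction: every later region depends on the
        # exchange token still being valid.
        if self.clock() - self.token_born > self.token_ttl:
            raise TokenExpired("restart from key_exchange")

    def on_key_exchange(self) -> str:
        self.token_born = self.clock()   # token becomes valid now
        return "send_auth"

    def on_send_auth(self) -> str:
        self._check_token()
        return "receive_base_values"

    def on_receive_base_values(self) -> str:
        self._check_token()
        return "propagate_secondary_keys"

    def on_propagate_secondary_keys(self) -> str:
        self._check_token()
        return "authenticated"

# Drive the flow with a controllable clock instead of real time.
fake_now = [0.0]
flow = AuthFlow(token_ttl=5.0, clock=lambda: fake_now[0])
while flow.state != "authenticated":
    flow.step()
```

Because the clock is injected, expiry paths can be exercised deterministically in tests, which is exactly the kind of easily modifiable environment described above.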
The next step, the visual automation that delivers the end-to-end solution for our client, we leave for the second part of this article.