Indian government and defence agencies are once again in the crosshairs of Pakistan-linked threat actors. Security researchers have uncovered a new wave of espionage campaigns designed to infiltrate critical departments, steal credentials, and establish persistent backdoors.
How the Campaign Operates
- Initial Access: Spear-phishing emails and weaponised shortcut (LNK) files disguised as internal communications.
- Payload Chain: Downloaders launch PowerShell scripts and DLL implants, executed in memory to avoid detection.
- Infrastructure: Attackers rely on bulletproof hosting services, making takedown and attribution difficult.
- Target Platforms: Both Windows and BOSS (Bharat Operating System Solutions) environments have been targeted—expanding attacker reach across government systems.
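The LNK-to-PowerShell chain described above can be spot-checked during triage with a short script. This is a minimal sketch, not a production detection: the indicator list is illustrative, and real LNK parsing would use a proper shell-link parser rather than raw string matching.

```python
# Triage sketch: flag shortcut (.lnk) files whose embedded command line
# references PowerShell indicators. LNK files often store the target
# command line as UTF-16LE, so we check both a raw and a UTF-16 view.
# The indicator list is illustrative, not exhaustive.

INDICATORS = ["powershell", "-enc", "downloadstring", "invoke-expression"]

def scan_lnk_bytes(data: bytes) -> list[str]:
    """Return the indicators found in raw .lnk file bytes."""
    # latin-1 never fails, so it is a safe way to view raw bytes as text;
    # errors="ignore" tolerates odd-length / malformed UTF-16 regions.
    text = (data.decode("latin-1")
            + data.decode("utf-16-le", errors="ignore")).lower()
    return [ind for ind in INDICATORS if ind in text]

if __name__ == "__main__":
    # Fabricated example resembling a weaponised shortcut's target
    # command line (this is NOT a real sample).
    fake = "cmd.exe /c powershell -enc SQBFAFgA".encode("utf-16-le")
    print(scan_lnk_bytes(fake))  # ['powershell', '-enc']
```

In practice this kind of check would run against mail-gateway attachments or user download folders, and a hit would route the file to a sandbox rather than block it outright.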
Impact & Strategic Context
- Espionage over destruction: The campaigns are focused on data theft, surveillance, and intelligence gathering rather than immediate disruption.
- Persistent adversaries: The tactics align with APT36 (aka Transparent Tribe), a Pakistan-linked actor with a long history of targeting India’s government and defence ecosystem.
- Trust exploitation: By impersonating internal departments like HR or Finance, attackers are weaponising human trust to bypass technical defences.
Defence Recommendations
- Strengthen Email Security – Enforce SPF, DKIM, DMARC; simulate phishing for high-risk roles.
- Enforce Credential Hygiene – MFA everywhere, monitor unusual login patterns.
- Limit Lateral Movement – Use strict segmentation between admin, IT, and sensitive data networks.
- Boost Endpoint Visibility – Detect anomalous PowerShell usage, suspicious DLL activity, and memory-resident behaviour.
- Proactive Threat Hunting – Look for attacker tool patterns and share findings across CERTs and security communities.
- Incident Readiness – Equip teams to capture memory artefacts; prepare clear comms to minimise fallout.
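The "Boost Endpoint Visibility" step above can be sketched as a scoring heuristic over PowerShell command lines collected from process-creation telemetry (e.g. Windows Event ID 4688 or Sysmon). The patterns, weights, and threshold below are illustrative assumptions, not tuned detection rules.

```python
# Heuristic sketch: score PowerShell command lines for traits common in
# in-memory execution chains. Weights and threshold are illustrative.
import re

SUSPICIOUS_PATTERNS = {
    r"-enc(odedcommand)?\b": 3,                    # base64-encoded payload
    r"-w(indowstyle)?\s+hidden": 2,                # hidden window
    r"downloadstring|invoke-webrequest|iwr\b": 3,  # download cradle
    r"-nop\b|-noprofile\b": 1,                     # skip profile loading
    r"frombase64string": 2,                        # in-memory decode
}

def score_commandline(cmdline: str) -> int:
    """Sum the weights of suspicious patterns present in a command line."""
    cl = cmdline.lower()
    return sum(w for pat, w in SUSPICIOUS_PATTERNS.items() if re.search(pat, cl))

def is_suspicious(cmdline: str, threshold: int = 3) -> bool:
    return score_commandline(cmdline) >= threshold
```

A hunt team could run this over exported process logs to surface candidates for manual review; legitimate admin scripts will occasionally score, which is why the output feeds triage rather than automated blocking.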
AI-Based Social Engineering Attacks
What’s Happening?
AI tools are helping attackers:
- Craft hyper-personalised messages
- Mimic voices & faces with deepfakes
- Automate research on targets
- Scale scams across email, SMS, calls & social media
Top Risks
Hyper-Personalised Phishing
- Risk: AI crafts natural emails tailored to your role or recent activity.
- Example: You get an email congratulating you on a product launch (info scraped from LinkedIn) with a link to a “press mention” — it’s a phishing page.
Deepfake CEO/Vendor Fraud
- Risk: Voice/video AI mimics leaders or vendors to push urgent actions.
- Example: A finance employee receives a call in the “CEO’s voice” asking for an immediate transfer to a supplier.
Automated Reconnaissance
- Risk: AI scrapes the web for personal & company info to craft convincing lures.
- Example: An attacker references your recent conference talk and uses your company’s invoice template in a fake payment request.
Multi-Channel Attacks
- Risk: Attackers combine email, calls, and chat apps to build credibility.
- Example: After an email “from IT support,” you also get a WhatsApp message reminding you to reset your password via a link.
Credential Harvesting via Chatbots
- Risk: AI chatbots pose as helpdesks or login portals to trick users.
- Example: A fake “Microsoft support bot” asks you to confirm your MFA code to “restore access.”
How to Stay Safe
✔️ Verify out-of-band — Always confirm unusual requests via a separate channel.
✔️ Strong MFA — Use phishing-resistant authentication (not just SMS).
✔️ Finance controls — Require dual approval for transfers & payroll changes.
✔️ Awareness training — Teach teams to spot deepfakes & AI-crafted phishing.
✔️ Limit exposure — Reduce sensitive info shared in public profiles/posts.
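The dual-approval finance control mentioned above can be modelled in a few lines. This is a minimal sketch: the `PaymentRequest` type and the names used are hypothetical, and a real system would also persist approvals and log every action.

```python
# Sketch of a dual-approval control: a transfer executes only after two
# *different* approvers sign off, and a requester can never approve
# their own request. PaymentRequest and all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    requester: str
    amount: float
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        if approver == self.requester:
            raise ValueError("requester cannot approve their own payment")
        self.approvals.add(approver)

    def can_execute(self) -> bool:
        # Dual approval: at least two distinct approvers required.
        return len(self.approvals) >= 2

req = PaymentRequest(requester="alice", amount=50_000)
req.approve("bob")
assert not req.can_execute()   # one approval is not enough
req.approve("carol")
assert req.can_execute()       # two distinct approvers: OK
```

The point of the design is that a single deepfaked "CEO call" can compromise at most one approver; the second, independent sign-off forces the attacker to deceive two people through separate channels.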
Key Takeaway
AI makes social engineering smarter, faster, and harder to spot. Your best defence: verify requests, educate your people, and secure your processes.