We scanned 50 companies last week. Not a single one got an A.
No exploitation. No authentication. No social engineering. Just passive, publicly observable data -- the same information any attacker with a browser and basic tooling can collect in under ten minutes.
The results were worse than we expected.
Out of 50 organizations -- a mix of funded startups, mid-market companies in regulated industries, and firms that have already suffered publicized breaches -- zero received a clean bill of health. Eighteen percent scored a Grade F, meaning they had critical exposures visible from the public internet. The remaining 82% landed in the B-to-C range with moderate but real issues. Nobody scored an A.
This is our first research publication as CELVEX Group. We are sharing it because these findings are not unique to these 50 companies. They represent patterns we see across the broader landscape, and most of them are fixable in an afternoon.
Methodology: What We Did and How
We selected 50 companies across three categories:
- Funded startups (Series A through C) -- companies scaling fast, likely prioritizing features over hardening
- Mid-market regulated companies -- firms in finance, healthcare, and SaaS that should have compliance-driven security programs
- Recent breach victims -- organizations that disclosed incidents in the past 12 months
Every scan was passive only. We did not authenticate to any system, submit any forms, exploit any vulnerability, or access any protected resource. Our methodology included:
- DNS record enumeration (MX, TXT, SPF, DMARC, DKIM)
- HTTP response header analysis
- TLS certificate inspection
- Public file and path discovery via standard HTTP GET requests
- Technology fingerprinting from publicly served content
This is the same surface-level reconnaissance that precedes virtually every targeted attack. The difference is that we documented it systematically and graded it.
Trust statement: We identified these issues but did not access any file contents. When we detected the presence of exposed files such as .git/config or .env, we confirmed their existence via HTTP response codes and headers only. We did not download, read, or store their contents.
The Numbers
| Grade | Count | Percentage | Definition |
|---|---|---|---|
| A | 0 | 0% | No significant findings |
| B | 24 | 48% | Minor misconfigurations, low immediate risk |
| C | 17 | 34% | Moderate issues, exploitable under the right conditions |
| F | 9 | 18% | Critical exposure -- active risk to the organization |
Finding category breakdown across all 50 companies:
| Finding | Prevalence | Severity |
|---|---|---|
| Missing or misconfigured DMARC | 100% of Grade F companies | High |
| Missing security headers (CSP, HSTS, X-Frame-Options) | Widespread across all grades | Medium |
| Exposed .git/config | Multiple companies | Critical |
| Exposed .env files | Multiple companies | Critical |
| Server version disclosure | Common across B and C grades | Low-Medium |
Every Grade F company had a missing DMARC record. That alone tells a story about baseline hygiene.
The Top 5 Findings
1. Missing DMARC Records -- Found on 100% of Grade F Companies
What it is: DMARC (Domain-based Message Authentication, Reporting and Conformance) tells receiving mail servers what to do when an email fails SPF or DKIM checks. Without it, anyone can send email that appears to come from your domain.
Why it matters: Domain spoofing is the foundation of business email compromise (BEC). An attacker does not need to hack your mail server -- they just send email as you. BEC accounted for over $2.9 billion in reported losses in 2023 according to the FBI's IC3 report, and that figure only includes what gets reported.
Every single Grade F company in our scan had no DMARC record. Not a permissive one. Not a monitoring-only one. None at all.
How to fix it:
Start with a monitoring-only policy to avoid disrupting legitimate email flows:
```
_dmarc.yourdomain.com TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@yourdomain.com"
```
Collect reports for 2-4 weeks, verify that all legitimate sending sources pass SPF/DKIM, then move to p=quarantine and eventually p=reject. This is a DNS TXT record change -- it costs nothing and takes five minutes to deploy.
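The staged rollout above hinges on reading the p= tag correctly. As a minimal sketch (the helper name is ours, and the record string is the example from above -- in practice you would fetch the TXT record for _dmarc.yourdomain.com with a DNS library such as dnspython), a DMARC record can be parsed like this:

```python
def dmarc_policy(txt_record: str) -> str:
    """Return the DMARC policy ('none', 'quarantine', 'reject') or 'missing'."""
    if not txt_record.startswith("v=DMARC1"):
        return "missing"
    # DMARC tags are semicolon-separated key=value pairs, e.g. "p=none"
    tags = dict(
        part.strip().split("=", 1)
        for part in txt_record.split(";")
        if "=" in part
    )
    return tags.get("p", "missing")

print(dmarc_policy('v=DMARC1; p=none; rua=mailto:dmarc-reports@yourdomain.com'))
# prints: none
```

A domain with no TXT record at all -- the state of every Grade F company in this scan -- corresponds to the "missing" case.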
2. Exposed .git/config Files -- Source Code Leakage
What it is: When a .git directory is accessible over HTTP, attackers can reconstruct your entire source code repository. The .git/config file alone reveals repository URLs, branch names, and sometimes credentials for private repositories.
Why it matters: Source code access gives attackers a complete map of your application logic, hardcoded secrets, API endpoints, internal service architecture, and known vulnerabilities in your dependencies. It transforms a black-box target into a white-box one.
We found multiple companies serving .git/config on production domains. A 200 response on /.git/config means the entire .git directory is likely browsable.
How to fix it:
Block access at the web server level. For Nginx:
```
location ~ /\.git {
    deny all;
    return 404;
}
```
For Apache:
```
<DirectoryMatch "/\.git">
    Require all denied
</DirectoryMatch>
```
Then audit your deployment pipeline. The .git directory should never be included in production artifacts. If you are using container-based deployments, ensure your Dockerfile copies application files explicitly rather than the entire build context.
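The confirmation step described above -- a 200 on /.git/config with no contents retained -- can be sketched as a classifier over the status code and the first bytes of the response. The helper name is ours; the HTTP request itself (urllib, requests, or any client) is omitted:

```python
def git_config_exposed(status_code: int, body_head: str) -> bool:
    """True if the response looks like a genuine, publicly served .git/config."""
    # A real .git/config starts with an INI-style section header such as [core]
    return status_code == 200 and body_head.lstrip().startswith("[core]")

print(git_config_exposed(200, "[core]\n\trepositoryformatversion = 0"))  # True
print(git_config_exposed(404, "Not Found"))                              # False
```

Checking the body prefix filters out servers that return a 200 with a custom error page for every path.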
3. Exposed .env Files -- Credential Exposure
What it is: .env files typically contain environment variables: database connection strings, API keys, SMTP credentials, third-party service tokens, and encryption secrets.
Why it matters: A single exposed .env file can contain enough credentials to compromise an entire application stack. Database credentials lead to data exfiltration. API keys lead to lateral movement into third-party services. SMTP credentials lead back to the email spoofing problem we covered above -- except now the attacker is sending from your actual mail infrastructure.
How to fix it:
Same web server blocks as above -- deny access to all dotfiles, with a carve-out for /.well-known, which ACME certificate validation (Let's Encrypt) needs to reach:

```
location ~ /\.(?!well-known) {
    deny all;
    return 404;
}
```
Beyond blocking access, rotate every credential that was in any .env file served from that host. Assume compromise. Then move secrets into a dedicated secrets manager (AWS Secrets Manager, HashiCorp Vault, Doppler, or similar) rather than flat files in your web root.
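The rotation step benefits from an exhaustive checklist. As a sketch (the helper name and sample file are ours), the variable names in a leaked .env file can be enumerated without retaining any of the values:

```python
def rotation_checklist(env_text: str) -> list:
    """List the variable names in a .env file so each credential can be rotated."""
    names = []
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blank lines and comments
        names.append(line.split("=", 1)[0].strip())
    return names

leaked = "DATABASE_URL=postgres://user:pass@host/db\nSMTP_PASSWORD=hunter2\n# internal\nSTRIPE_KEY=sk_live_abc"
print(rotation_checklist(leaked))
# prints: ['DATABASE_URL', 'SMTP_PASSWORD', 'STRIPE_KEY']
```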
4. Missing Security Headers
What it is: HTTP security headers instruct browsers to enable built-in protections. The most impactful ones we checked for:
- Content-Security-Policy (CSP): Controls which resources the browser is allowed to load, mitigating cross-site scripting (XSS)
- Strict-Transport-Security (HSTS): Forces HTTPS connections, preventing protocol downgrade attacks
- X-Frame-Options: Prevents your pages from being embedded in iframes, mitigating clickjacking
How to fix it:
Start with these baseline headers and refine from there:
```
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Referrer-Policy: strict-origin-when-cross-origin
Content-Security-Policy: default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'
```
CSP requires the most tuning -- deploy it in report-only mode first (Content-Security-Policy-Report-Only) and monitor for violations before enforcing.
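The header check we ran amounts to comparing a response's headers against the baseline above. A minimal sketch (the function name is ours; header-name matching is case-insensitive, as HTTP header names are):

```python
BASELINE = [
    "Strict-Transport-Security",
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Referrer-Policy",
    "Content-Security-Policy",
]

def missing_security_headers(headers: dict) -> list:
    """Return the baseline headers absent from a response's header dict."""
    present = {name.lower() for name in headers}
    return [h for h in BASELINE if h.lower() not in present]

example = {"Strict-Transport-Security": "max-age=31536000", "X-Frame-Options": "DENY"}
print(missing_security_headers(example))
# prints: ['X-Content-Type-Options', 'Referrer-Policy', 'Content-Security-Policy']
```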
5. Server Version Disclosure
What it is: Web servers, application frameworks, and middleware often include version numbers in HTTP response headers or error pages -- for example, Server: Apache/2.4.49 or X-Powered-By: PHP/7.4.3.
Why it matters: Version disclosure by itself is low severity. But it eliminates guesswork for attackers. If your server header says Apache/2.4.49, an attacker immediately knows to try CVE-2021-41773 (path traversal and RCE). Without that version string, they either have to test blindly or move on to an easier target.
How to fix it:
For Nginx, add to your http block:
```
server_tokens off;
```
For Apache:
```
ServerTokens Prod
ServerSignature Off
```
Remove X-Powered-By headers at the application level. In Express.js: app.disable('x-powered-by'). In PHP, set expose_php = Off in php.ini.
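Verifying the fix is a matter of checking the relevant headers for product/version tokens. A sketch of that check (the function name and regex are ours -- the pattern matches strings like "Apache/2.4.49" or "PHP/7.4.3"):

```python
import re

# A product name followed by a slash and a dotted version number
VERSION = re.compile(r"[A-Za-z][\w.-]*/\d+(\.\d+)+")

def discloses_version(headers: dict) -> list:
    """Return the header names whose values contain a software version string."""
    return [
        name for name, value in headers.items()
        if name.lower() in ("server", "x-powered-by") and VERSION.search(value)
    ]

print(discloses_version({"Server": "Apache/2.4.49", "X-Powered-By": "PHP/7.4.3"}))
# prints: ['Server', 'X-Powered-By']
print(discloses_version({"Server": "Apache"}))
# prints: []
```

A bare product name like Server: Apache (what ServerTokens Prod emits) passes; the version string is what creates the CVE lookup shortcut described above.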
What Attackers Do With This
These findings are not theoretical. Here is how a real attack chain builds from passive reconnaissance alone:
Step 1: Recon. The attacker runs the same passive scans we did. They find a missing DMARC record and an exposed .env file on a staging subdomain.
Step 2: Credential harvest. The .env file contains SMTP credentials and a database connection string. The attacker now has authenticated access to the company's email-sending infrastructure and potentially a database.
Step 3: Phishing from a trusted domain. Using the SMTP credentials (or simply spoofing the domain due to missing DMARC), the attacker sends targeted phishing emails to employees and customers. These emails pass SPF because they are coming from legitimate infrastructure.
Step 4: Lateral movement. Credentials from the .env file are reused across services. The database connection string leads to customer data. API keys provide access to payment processors, cloud storage, or internal tools.
Step 5: Persistence. With source code from the exposed .git directory, the attacker understands the application well enough to establish persistent access -- backdoors that blend in with legitimate code patterns.
None of this required a zero-day. None of it required sophisticated tooling. Every step used information that was publicly accessible.
The Iceberg Problem
Everything in this report was found passively -- from the outside, without authentication, without touching anything that required permission.
This is the tip of the iceberg.
Passive scanning reveals misconfigurations in what is publicly exposed. It cannot see:
- Authentication and authorization flaws -- broken access controls, IDOR vulnerabilities, privilege escalation paths
- Business logic errors -- payment manipulation, race conditions, workflow bypasses
- Internal API security -- unauthenticated internal endpoints, GraphQL introspection leaks, excessive data exposure
- Supply chain risks -- vulnerable dependencies, compromised packages, outdated runtime environments
- Cloud misconfigurations -- overly permissive IAM roles, public S3 buckets behind authenticated endpoints, misconfigured serverless functions
If 100% of the companies we scanned had issues visible from the public internet, the findings behind the login page are all but certain to be worse. Authenticated testing consistently reveals an order of magnitude more issues than passive scanning alone.
The companies that scored Grade F on a passive scan are not just exposed to opportunistic attackers. They are exposed to anyone who decides to look.
How to Check Yourself
We built a free external scan tool that runs the same passive checks described in this post against your domain. It produces a security grade and a prioritized list of findings with remediation guidance.
The scan takes 5-10 minutes. No signup is required to receive the summary report. You will get:
- An overall security grade (A through F)
- Categorized findings with severity ratings
- Specific remediation steps for each issue
- A comparison against the benchmarks from this research
If your results concern you -- or if you want to see what is below the waterline -- we offer authenticated penetration testing and continuous attack surface monitoring. But start with the free scan. Know where you stand.
Find out where your company stands
Run the same passive scan we used in this research against your own domain. Free, no signup required.
Scan Your Domain Free

Sources
- FBI IC3 2023 Internet Crime Report -- BEC loss statistics ($2.9B+)
- MDN: Content-Security-Policy
- DMARC.org -- DMARC specification and implementation guides
- SecurityHeaders.com -- HTTP security header analysis tool
CELVEX Group is a cybersecurity research and offensive security firm built with precision in North America. We combine automated reconnaissance with expert manual testing to find what scanners miss.