Security and Compliance
How is your application security testing performed? Internal, third parties, or both? What is the cadence? Explain your methodology.
Our application currently undergoes standard manual QA testing by our internal team, which includes a structured security checklist focused on common vulnerabilities such as broken authentication, input validation (XSS, SQL injection), session management, access control, and secure error handling.
In addition to this, we use Cloudflare to enforce HTTPS, protect against DDoS attacks, apply rate limiting, and enable web application firewall (WAF) rules — all of which help mitigate a wide range of external threats.
At this stage, we have not yet implemented formal automated security testing or third-party audits. However, we are in the process of evaluating tools like Snyk (for dependency scanning) and OWASP ZAP (for basic vulnerability scanning) for integration into our CI/CD pipeline. These will allow us to automatically identify vulnerabilities on each deployment.
As we scale, we plan to engage a third-party security firm for deeper assessments, including penetration testing and secure code reviews. Security is a growing priority for us, and we are committed to maturing our processes as our product and customer base grow.
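For illustration, below is a minimal sketch of the kind of CI gate we are evaluating: a pipeline step that fails a deployment when the Snyk CLI reports high or critical issues. The CLI flags, threshold, and script name are assumptions about an eventual configuration, not a description of our current pipeline.

```typescript
// ci-dependency-scan.ts: illustrative CI gate (assumes the Snyk CLI is installed
// and SNYK_TOKEN is supplied as a pipeline secret; not our confirmed setup)
import { execSync } from 'node:child_process';

function runDependencyScan(): void {
  try {
    // --severity-threshold=high makes `snyk test` exit non-zero when
    // high or critical vulnerabilities are found in project dependencies
    execSync('snyk test --severity-threshold=high', { stdio: 'inherit' });
    console.log('Dependency scan passed: no high/critical vulnerabilities found.');
  } catch {
    // A non-zero exit code indicates findings at or above the threshold
    console.error('Dependency scan failed: blocking this deployment.');
    process.exit(1);
  }
}

runDependencyScan();
```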
Please summarise or attach your network vulnerability management processes and procedures?
We have a structured process in place for identifying, assessing, and addressing network and host-level vulnerabilities within our infrastructure.
Vulnerability Scanning:
We use Google Cloud Security Command Center (SCC) to perform regular network and host vulnerability scans. These scans are conducted monthly to identify misconfigurations, exposed services, and known vulnerabilities across our infrastructure, including GCP-managed services and Kubernetes (GKE) nodes.
Threat Intelligence & Monitoring:
We rely on integrated security feeds and alerts from GCP SCC, Cloudflare, and Atatus to stay aware of vulnerabilities relevant to our environment. These tools provide continuous monitoring for new threat vectors and suspicious activity, especially at the network and application layers.
Review & Mitigation:
Identified vulnerabilities are triaged and reviewed by multiple team members, including engineers and release managers, to determine appropriate remediation. Patching decisions are prioritized based on severity, exploitability, and impact on production workloads.
Tracking & Accountability:
We maintain an internal tool, Db Dash, to track, manage, and resolve vulnerabilities. This ensures visibility into the status of each issue and accountability for timely remediation.
Patch Management:
While we do not currently use an automated patching system, all patches related to vulnerabilities identified through scans or alerts are manually assessed and applied as needed, with peer review and regression testing in our dedicated testing environment before production deployment.
This process ensures we proactively identify and manage risks in our cloud-hosted and containerized infrastructure, while also maintaining operational stability and compliance with basic security hygiene.
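For illustration, a minimal sketch of how active SCC findings could be pulled for triage using the @google-cloud/security-center Node.js client; the organization ID, filter, and field handling are placeholder assumptions rather than our exact tooling.

```typescript
// list-scc-findings.ts: illustrative sketch of pulling active Security Command Center
// findings for triage (ORGANIZATION_ID and the filter string are placeholders)
import { SecurityCenterClient } from '@google-cloud/security-center';

const client = new SecurityCenterClient();

async function listActiveFindings(): Promise<void> {
  // "sources/-" queries findings across all sources in the organization
  const parent = 'organizations/ORGANIZATION_ID/sources/-';
  const [results] = await client.listFindings({
    parent,
    filter: 'state="ACTIVE"',
  });
  for (const result of results) {
    const finding = result.finding;
    console.log(`${finding?.category}: ${finding?.resourceName} (severity: ${finding?.severity})`);
  }
}

listActiveFindings().catch(console.error);
```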
What tools do you use for vulnerability management?
We host our infrastructure on Google Cloud Platform (GCP) and run our application on Kubernetes. Our network is protected by Cloudflare, which provides WAF, HTTPS enforcement, DDoS protection, rate limiting, and other critical security controls at the edge.
For vulnerability management, we currently address the most critical and basic risks as follows:
Cloudflare handles edge-layer protections including HTTPS enforcement, TLS configuration, bot mitigation, and web application firewall (WAF) rules.
GCP provides infrastructure security baselines, IAM management, and firewall configuration. We rely on default secure configurations and restrict public access to our cloud services.
We are evaluating the use of the following tools to improve host- and container-level vulnerability management:
Google Cloud Security Command Center (SCC) — for infrastructure and misconfiguration alerts
Trivy — for container image and Kubernetes node vulnerability scanning
kube-bench — to ensure our Kubernetes cluster complies with CIS security benchmarks
Nmap — for periodic public port and network surface scans
For dependency security and static analysis, we are exploring Snyk and GitHub Dependabot to scan for known vulnerabilities in open-source packages.
Application Vulnerability Management
Please summarise or attach your application vulnerability management processes and procedures?
At Viasocket, our vulnerability management approach prioritizes protection at the network, application, and code levels, leveraging leading cloud and security platforms. Our process includes:
Perimeter Protection: We use Cloudflare WAF to mitigate critical vulnerabilities such as SQL Injection, Cross-Site Scripting (XSS), and Distributed Denial-of-Service (DDoS) attacks. Rate limiting and IP reputation controls are enabled to defend against abusive traffic.
Cloud Security: Our backend runs on Google Cloud Platform (GCP) in Google Kubernetes Engine (GKE), with no public internal IPs, VPC isolation, and IAM role management. GCP Security Command Center flags misconfigurations or security risks.
Authentication: We use OAuth for secure authentication, and all API communication is encrypted over HTTPS.
Monitoring: Atatus, Cloudflare, and GCP tools provide real-time performance, error, and security monitoring, helping us detect anomalies and investigate security issues quickly.
Incident Response: We monitor runtime systems continuously and act based on predefined alerting rules. Our team uses GCP and Atatus logs to respond to suspicious behavior or security events.
Ongoing Improvements:
We are currently integrating automated tools to improve our handling of:
Vulnerable dependencies (SCA) – tools like Snyk or Trivy
Static code vulnerabilities (SAST) – tools like Semgrep
These upgrades will ensure we catch vulnerabilities early in development and enhance compliance with common industry expectations.
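Relating to the Authentication point above, the following is a minimal sketch of server-side verification of a Google-issued ID token using google-auth-library; the client ID handling and error behavior are illustrative assumptions, not our exact implementation.

```typescript
// verify-google-token.ts: illustrative server-side check of a Google-issued ID token
// (GOOGLE_CLIENT_ID is a placeholder; error handling is simplified for brevity)
import { OAuth2Client } from 'google-auth-library';

const GOOGLE_CLIENT_ID = process.env.GOOGLE_CLIENT_ID ?? '';
const oauthClient = new OAuth2Client(GOOGLE_CLIENT_ID);

export async function verifyGoogleIdToken(idToken: string) {
  // Verifies the token's signature, expiry, and audience against our client ID
  const ticket = await oauthClient.verifyIdToken({
    idToken,
    audience: GOOGLE_CLIENT_ID,
  });
  const payload = ticket.getPayload();
  if (!payload?.email_verified) {
    throw new Error('Unverified Google account');
  }
  return { userId: payload.sub, email: payload.email };
}
```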
What tools do you use for application vulnerability management?
✅ Currently in Use:
| Tool | Purpose |
| --- | --- |
| Cloudflare (WAF) | Blocks SQLi, XSS, DDoS, bot attacks |
| GCP Security Center | Monitors for misconfigurations, IAM violations |
| GKE (Kubernetes) | Isolated container environments, no public internal IPs |
| Atatus | Application monitoring, alerting, runtime issue tracking |
🔧 In Progress / Planned:
| Tool | Purpose |
| --- | --- |
| Snyk / Trivy | Scan open-source dependencies for known CVEs (SCA) |
| Semgrep | Scan source code for injection flaws, insecure patterns (SAST) |
| Gitleaks / TruffleHog | Detect hardcoded secrets/tokens in Git history |
Production Patching
How do you regularly evaluate patches and updates for your infrastructure?
Viasocket follows a structured and proactive patching process to ensure security and reliability in our production environment:
Library & Dependency Updates: We regularly review and update the libraries and frameworks used in our codebase. Critical security patches are prioritized and deployed promptly.
Environment Separation: We maintain separate testing and production environments. All patches and updates are first applied to the testing environment, where we perform regression and stability testing before production deployment.
Containerized Deployments: Our backend services run on Google Kubernetes Engine (GKE). Application containers are rebuilt regularly using updated base images and redeployed via CI/CD pipelines.
CI/CD Rollouts: Deployments use a controlled CI/CD pipeline with support for rolling updates and rollback mechanisms, minimizing risk during patching.
Cloud Infrastructure: Underlying infrastructure patches are automatically managed by Google Cloud Platform (for compute resources) and Cloudflare (for edge and WAF services), ensuring timely security updates at the OS and network layers.
This combination of automated patching, version control, and safe deployment practices ensures that we consistently address critical vulnerabilities with minimal impact to production operations.
Are all endpoint laptops that connect directly to production networks centrally managed?
At Viasocket, all employees operate from a secure in-office environment. Each team member is responsible for managing their own workstation. While we do not currently use a centralized endpoint management solution, access to production systems is strictly limited and controlled through secure methods. Workstations do not connect directly to production infrastructure. All access is mediated through secure cloud environments (GCP/GKE) and is gated via SSH keys, VPNs, and role-based permissions.
Describe standard employee issued device security configuration/features. (Login Password, antimalware, Full Disk Encryption, Administrative Privileges, Firewall, Auto-lock, etc.)
Although device configurations are not centrally enforced, we maintain internal standards and encourage all employees to follow security best practices, including:
Employees are also trained on general security hygiene and safe software development practices. All development and operations workflows occur within secured environments, such as GCP-hosted containers, ensuring minimal reliance on local execution or sensitive local storage.
Does sensitive or private data ever reside on endpoint devices? How is this policy enforced?
We maintain a strict policy that no sensitive or private customer data is stored on endpoint devices. All sensitive operations are conducted through secure cloud infrastructure, and customer data remains encrypted and contained within GCP-managed services.
This policy is enforced through technical design: we architect systems to never expose sensitive data at the endpoint level. Combined with secure defaults in our cloud infrastructure and clear internal guidelines, this ensures that the risk of endpoint-based data exposure is effectively mitigated.
Config Management
Are the hosts where the service is running uniformly configured?
Yes, all hosts and containers running Viasocket services are configured uniformly. Our infrastructure is built on Google Kubernetes Engine (GKE), which allows us to manage workloads using standardized container images and version-controlled configuration files. Each deployment adheres to a defined baseline configuration, ensuring consistency across environments (testing and production). Infrastructure definitions, environment variables, and runtime parameters are maintained in source control and follow Infrastructure-as-Code (IaC) principles to minimize drift and misconfiguration.
Are changes to the production environment reviewed by at least two engineers/operations staff?
Yes, all changes to the production environment are reviewed by at least two team members. We have dedicated release managers who review and approve every deployment after peer code review, ensuring proper oversight before any changes are applied to production.
Secrets Management
Describe your secrets management strategy: (auth tokens, passwords, API credentials, certificates)
At Viasocket, we follow a secure and centralized approach to managing secrets such as authentication tokens, API keys, passwords, and certificates. Our key practices include:
This approach minimizes exposure, enforces control, and ensures secure handling of all sensitive credentials across our infrastructure.
Logs
Are all security events (authentication events, SSH session commands, privilege elevations) in production logged?
Yes, all critical security events in our production environment are comprehensively logged and monitored. This includes:
Logs are securely stored, timestamped, and retained according to our internal policy. They are regularly reviewed and used for security monitoring, incident response, and compliance verification. Anomalies and suspicious events are flagged through our observability stack, which includes Atatus, Cloudflare, and GCP-native monitoring tools.
Network Security
Is the production network segmented into different zones based on security levels?
Yes, our production environment is hosted within a Google Cloud VPC, which provides a secure, isolated network environment. While we use a single VPC for both testing and production, services are logically separated and access is tightly controlled through firewall rules, IAM policies, and namespace-level isolation within Google Kubernetes Engine (GKE). No internal APIs, databases, or backend services are publicly exposed. All such components are assigned private IP addresses only, and communication is restricted within the cluster or VPC using Kubernetes network policies and GCP firewall rules, ensuring secure, segmented access even within a shared network.
What is the process for making changes to network configuration?
All network configuration changes (such as updates to VPC rules, firewall settings, or IP access control lists) are performed manually but undergo multiple layers of review before implementation. Changes are reviewed by relevant engineers and release managers, ensuring that no modifications are applied without proper oversight and risk assessment. This review process ensures network changes align with our security and operational standards.
Is all network traffic over public networks to the production infrastructure sent over cryptographically sound encrypted connections? (TLS, VPN, IPSEC, etc). If there are plaintext connections, what is sent unencrypted?
Cryptographic Design
What cryptographic frameworks are used to secure data in transit over public networks?
What cryptographic frameworks are used to secure data at rest?
We use Google Cloud's default encryption at rest, which leverages AES-256 encryption for all data stored on disks, databases, and cloud-managed services (such as GKE, Cloud Storage, Cloud SQL, etc.). For additional protection, sensitive user information stored within our databases is explicitly encrypted at the application level using AES-256, ensuring double-layer protection beyond the infrastructure defaults.
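For illustration, a minimal sketch of application-level AES-256 field encryption using the Node.js crypto module (AES-256-GCM shown); the key sourcing, encoding, and cipher mode are assumptions rather than our exact implementation.

```typescript
// field-encryption.ts: illustrative sketch of application-level AES-256 encryption
// for sensitive fields (key sourcing and storage format are assumptions)
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

// 32-byte key supplied out of band, e.g. via a secret manager (placeholder variable)
const KEY = Buffer.from(process.env.FIELD_ENCRYPTION_KEY ?? '', 'base64');

export function encryptField(plaintext: string): string {
  const iv = randomBytes(12); // unique IV per encrypted value
  const cipher = createCipheriv('aes-256-gcm', KEY, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Store IV, auth tag, and ciphertext together as one encoded string
  return [iv, tag, ciphertext].map((b) => b.toString('base64')).join('.');
}

export function decryptField(encoded: string): string {
  const [iv, tag, ciphertext] = encoded.split('.').map((p) => Buffer.from(p, 'base64'));
  const decipher = createDecipheriv('aes-256-gcm', KEY, iv);
  decipher.setAuthTag(tag); // verifies integrity as well as confidentiality
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}
```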
What cryptographic frameworks are used to store passwords?
We support multiple authentication methods, including Google OAuth and traditional email/password login. For users authenticating via email and password, we ensure password security by applying industry-standard cryptographic hashing and salting techniques using trusted libraries within the Node.js ecosystem. Beyond password protection, all sensitive user data stored in our databases is encrypted at rest using AES-256 encryption, providing a robust layer of security for confidential information. This combined approach ensures strong security controls around user credentials and sensitive data, leveraging both secure external authentication providers and best-practice cryptographic safeguards internally.
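For illustration, a minimal sketch of salted password hashing in the Node.js ecosystem using bcrypt; the specific library and cost factor are assumptions rather than our exact implementation.

```typescript
// password-hashing.ts: illustrative sketch of salted password hashing with bcrypt
// (the library choice and cost factor are assumptions)
import bcrypt from 'bcrypt';

const SALT_ROUNDS = 12; // work factor; higher is slower and more resistant to brute force

export async function hashPassword(plaintext: string): Promise<string> {
  // bcrypt generates a unique salt per hash and embeds it in the output string
  return bcrypt.hash(plaintext, SALT_ROUNDS);
}

export async function verifyPassword(plaintext: string, storedHash: string): Promise<boolean> {
  // Constant-time comparison of the candidate password against the stored hash
  return bcrypt.compare(plaintext, storedHash);
}
```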
Are any custom cryptographic frameworks/implementations used? If so, have any custom cryptographic frameworks been reviewed by an independent 3rd party?
Key Management
How are cryptographic keys (key management system, etc.) managed within your system?
We manage cryptographic keys and secrets using Google Cloud Secret Manager, which provides secure storage and access control for all sensitive keys, tokens, and credentials. Access to secrets is tightly controlled using Google Cloud IAM policies, ensuring that only a very limited number of authorized personnel can retrieve or manage these keys. All access to secrets is logged and audited, enabling traceability and accountability for key usage. While we currently do not have an automated key rotation policy, we rely on Google Cloud Secret Manager’s secure infrastructure to protect keys at rest and in transit. We do not use hardware security modules (HSMs) at this time but benefit from Google Cloud’s managed security features and best practices for key protection, including encrypted storage and secure access controls.
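For illustration, a minimal sketch of retrieving a secret from Google Cloud Secret Manager with its Node.js client; the project and secret names are placeholders, and caching and error handling are omitted.

```typescript
// get-secret.ts: illustrative sketch of reading a secret from Google Cloud Secret Manager
// (PROJECT_ID is a placeholder; caching and error handling are omitted for brevity)
import { SecretManagerServiceClient } from '@google-cloud/secret-manager';

const client = new SecretManagerServiceClient();

export async function getSecret(name: string): Promise<string> {
  // Resolves the latest enabled version of the named secret
  const [version] = await client.accessSecretVersion({
    name: `projects/PROJECT_ID/secrets/${name}/versions/latest`,
  });
  return version.payload?.data?.toString() ?? '';
}
```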
How do you keep aware of potential security vulnerabilities and threats that may affect your service?
We rely on the security features and threat intelligence provided by our cloud provider (GCP) and security partners such as Cloudflare, which proactively identify emerging threats and vulnerabilities. We maintain ongoing awareness of potential security vulnerabilities and threats through continuous monitoring and alerting.
How do you log and alert on relevant security events? (This includes the network and application layers.)
Our logging and alerting framework spans both application and network layers. We use Atatus for detailed application-layer monitoring, performance tracking, and security event logging. At the network layer, we rely on Cloudflare to monitor traffic, block threats, and log events through its Web Application Firewall (WAF) and rate limiting features. Logs from both systems are centrally aggregated and monitored. Alerts generated from security events, such as suspicious traffic patterns or application errors, are configured to notify our security and operations teams via Slack and email, enabling timely investigation and response. This dual-layer monitoring ensures comprehensive coverage of security events across our infrastructure.
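For illustration, a minimal sketch of forwarding a security event to Slack via an incoming webhook; the webhook environment variable and message format are assumptions rather than our actual alert routing.

```typescript
// security-alert.ts: illustrative sketch of forwarding a security event to Slack
// (SECURITY_ALERTS_WEBHOOK and the payload shape are assumptions)
const SLACK_WEBHOOK_URL = process.env.SECURITY_ALERTS_WEBHOOK ?? '';

export async function notifySecurityEvent(source: string, summary: string): Promise<void> {
  // Slack incoming webhooks accept a simple JSON body with a "text" field
  const response = await fetch(SLACK_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: `[${source}] ${summary}` }),
  });
  if (!response.ok) {
    console.error(`Failed to deliver security alert: HTTP ${response.status}`);
  }
}
```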
Describe or attach your Security Incident Response Program?
Our Security Incident Response Program is designed to ensure timely detection, containment, and remediation of security incidents to minimize impact on our services and customers. The program includes:
How is your Incident Response Plan tested? Include cadence.
We test our Incident Response Plan through periodic tabletop exercises and simulated scenarios involving key team members from security, operations, and development. These exercises occur at least bi-annually and are designed to validate the effectiveness of our procedures, communication, and coordination under realistic conditions. Additionally, we review and update the plan after any significant incident or change to our infrastructure to ensure it remains current and effective.
Do you have a formal service level agreement (SLA) for incident response?
https://viasocket.com/faq/others/service-level-agreement-sla
Do you have formally defined criteria for notifying a client during an incident that might impact the security of their data or systems? What are your SLAs for notification?
Yes, we have a defined approach for notifying clients in the event of a security incident that could potentially impact their data or systems.
This process complements our broader SLA commitments around uptime, maintenance, and operational transparency, and reflects our priority to protect user data and maintain trust. |
How do you limit data exfiltration from production endpoint devices?
Data exfiltration from production environments is tightly controlled. SSH access to production servers is restricted through IAM-based access control, and only a very limited set of authorized engineers are granted permission. All access is logged and monitored. Additionally, production environments are configured to disallow file extraction or external data transfers, and outbound internet access is disabled by default where not explicitly required. These measures collectively ensure that data movement from production systems is tightly regulated.
What systems do you have in place that mitigate classes of web application vulnerabilities? (e.g.: WAF, proxies, etc.)
We use Cloudflare Web Application Firewall (WAF), which is fully configured to block a wide range of web application vulnerabilities, including OWASP Top 10 threats such as SQL injection, XSS, and RCE. In addition, we utilize Cloudflare rate limiting, bot protection, and custom firewall rules to prevent abuse and reduce the attack surface. All public traffic to the application flows through Cloudflare, ensuring comprehensive edge-level protection.
Do you have breach detection systems and/or anomaly detection with alerting?
Yes, we use Cloudflare and Atatus for breach and anomaly detection. These tools monitor both network-level traffic and application-level behavior. We have alerting configured via email, which notifies the team of any abnormal activities such as suspicious login attempts, traffic spikes, or access pattern anomalies. These alerts allow for timely investigation and response to potential incidents.