DataTribe Insights - Q1 2023 - Let's Ask ChatGPT What to Do About SVB

The DataTribe Team

Introduction

The first quarter of 2023 was not boring. There were some notable surprises, but the quarter ultimately closed largely as expected. Here are a few highlights:

  • SVB, a pillar of the innovation ecosystem, unexpectedly ran into a liquidity crisis. In addition to causing a massive scramble among startups and investors to manage cash flow, the episode raised awareness of potential weaknesses in smaller regional banks. This wobble in the financial system added to the economic headwinds that were already blowing. Most startups and VCs had taken the stability of their bank for granted. That era is gone. The SVB crisis provided the startup community with a crash course in treasury management. We anticipate that formal cash management policies and redundant banking relationships will become standard for even the smallest startups. In addition, terms around lines of credit that require companies to keep most of their deposits in the lender’s bank will likely evolve to provide companies with more flexibility.
  • While ChatGPT came out in November of 2022, it was in this past quarter that ChatGPT really started to accelerate and capture imaginations. In the quarter, we saw the launch of GPT-4, a large language model capable of passing the bar exam in the top decile (keep reading for implications and opportunities in cybersecurity). Both Microsoft and Google launched AI-powered search assistants. Very disconcerting was the leak into the wild of Meta’s large language model (LLM), LLaMA. Unlike OpenAI’s and Google’s LLMs, which require large compute resources to function, LLaMA can operate on a laptop. The genie is out of the bottle.
  • The Biden Administration released an update to the National Cybersecurity Strategy. The strategy provides a comprehensive overview of key cyber challenges that confront us and presents some bold ideas for improving security.
  • The Federal Reserve continued to raise interest rates, venture capital investment slowed to its lowest pace in a decade, and private company valuations continued to decline.

On the surface, there is a lot of negative economic news, but a deeper dive yields some optimism. We are in the throes of an arguably overdue correction in overheated tech markets. While the venture finance market has slowed, we continue to see an abundance of innovation from brilliant founders – both within and beyond the DataTribe portfolio. The rapid advance of artificial intelligence carries risk, but also abundant opportunity. Seed valuations in cybersecurity remain near all-time highs, suggesting a broadly growth-oriented outlook for the sector.

In this quarter’s installment of DataTribe Insights, we review the latest market trends, discuss the merits of key sections from the National Cybersecurity Strategy, evaluate the inherent risks and opportunities of artificial intelligence through a cybersecurity lens, and explore a new idea for security-by-design infrastructure.

Q1 Cybersecurity Deal Activity

The first quarter of 2023 was a grim one for most founders raising venture capital. U.S. cybersecurity deal activity[1] in the quarter was at or near decade lows from seed (21 deals in Q1 2023 vs. 20 in Q1 2015) through Series E. Year-over-year, cybersecurity seed deal volume was down 56% (from 48 to 21), and down 50% (42 to 21) from the previous quarter. The broader U.S. venture capital ecosystem marked similar low points, albeit with a sharper decline than what we observed in cybersecurity. Valuations remain compressed at all stages except seed, with the notable exception being the $300M Series D raised by Wiz at a 54x revenue multiple and a $10 billion pre-money valuation.

 

Figure 1 - Quarterly Deal Volumes - Seed through B
Figure 2 - Q1 Deal Volumes - Seed through B

Kudos to the Wiz team on their phenomenal success. But what about the rest of the venture market, particularly for cybersecurity companies? The seed-stage cybersecurity market remains a relatively bright spot, with a median pre-money valuation of $15.5M, eclipsed only by the all-time high of $15.8M observed in Q4 2022. The median cybersecurity seed round size also hit a new all-time high of $4.5M in the quarter. These two data points come on the back of the slowest seed investment pace of the last decade, meaning a new level of concentration in the cybersecurity innovation ecosystem. Fewer companies receiving more funding at higher valuations is likely a good thing for the sector, particularly for the enterprise CISO, who is already overwhelmed with vendors trying to sell the latest product.

But what of the other venture-backed companies in the cybersecurity ecosystem? Where are they and where are they going? Q1 2023 marked the lowest cybersecurity M&A activity in ten years. Meanwhile, the number of cybersecurity companies reported as “Out of Business” in Q1 2023 is well above historical norms and nearing an all-time high. The collapse of Silicon Valley Bank highlighted, among other things, the unexpectedly high cash burn of venture-backed companies. Without a sudden and drastic change in venture markets, the market will continue to reset through layoffs and company failures in the coming months as venture-backed companies fail either to raise new funds or to achieve cash-neutral operations.

Figure 3 - Median Pre-Money Valuations: Cybersecurity vs. All Verticals - U.S. Seed, Series A, and Series B

Biden’s National Cybersecurity Strategy Presents a Provocative Idea

On March 2, 2023, the Biden administration announced an update to the National Cybersecurity Strategy. It’s a good read. The document provides a comprehensive and thoughtful discussion of key cybersecurity risks confronting us in a digital-first world. For those who may not have had the opportunity to check it out, here are a few highlights:

  • What could easily have run into the hundreds of pages weighs in at a tight 39 pages
  • The document provides a good discussion of the current environment and context:
    • Emerging trends
    • Malicious actors
    • Recap of existing policies and executive orders the strategy builds on
  • There’s a focus on five pillars centered around a path to resilience – each pillar with numerous subsections:
    • Defend Critical Infrastructure
    • Disrupt and Dismantle Threat Actors
    • Shape Market Forces to Drive Security and Resilience
    • Invest in a Resilient Future
    • Forge International Partnerships to Pursue Shared Goals
  • It closes with brief (if a bit hand-wavy) thoughts on implementation:
    • In fairness, this document focuses on articulating priorities and not trying to prescribe the “how.”

In the Biden strategy, one big idea jumped out at us:

“Shift liability for insecure software products and services. We must begin to shift liability onto the entities that fail to take reasonable precautions to secure their software while recognizing that even the most advanced software programs cannot prevent all vulnerabilities. Companies that make software must have the freedom to innovate, but they must also be held liable when they fail to live up to the duty of care they owe customers, businesses, and critical infrastructure providers.”

What does that mean?  How would that work?  Do we want the government meddling in one of the most vibrant parts of the economy?

The foundational thinking behind this idea of shifting liability has been percolating within the Biden team for a while. In February 2022, Chris Inglis, the National Cyber Director, co-authored a piece in Foreign Affairs called “The Cyber Social Contract.” In it, Inglis argues, “Those more capable of carrying the load—such as governments and large firms—must take on some of the burden, and collective, collaborative defense needs to replace atomized and divided efforts… A durable solution must involve moving away from the tendency to charge isolated individuals, small businesses, and local governments with shouldering absurd levels of risk.”

We agree that there’s a real problem here and that it is probably a place where some type of regulatory nudge is necessary to shift incentives and redirect market forces toward shipping products that are secure by design. The fact that there are over 180K vulnerabilities in MITRE’s Common Vulnerabilities and Exposures (CVE) database underscores just how buggy our software products are. Products incessantly nagging you to patch don’t scale in a world where the average U.S. household has 25 digital devices. And blaming the customer for not patching when there’s a security issue does seem like dodging responsibility. There’s got to be a better way.

Let’s face it: building more security into products is expensive and slows time to market. Further, additional security isn’t something most mainstream customers pay much attention to, much less are willing to pay more for. So, to pursue secure-by-design software is in many ways to put yourself at a disadvantage in today’s market: your competitors will just cut corners and ship faster at a lower cost. Until customers appreciate and pay for security (which may never happen), it’s really hard to make the business case for significant investment in it. It’s a race to the bottom. That’s where penalizing software makers for not adhering to reasonable standards will help change the investment calculus in security, hopefully reducing the number of vulnerabilities and the need for patching.

For over sixty years, software has been left largely alone to flourish, unrestrained by regulatory burdens. And flourish it has. Software has truly eaten the world, becoming critical to modern life. We have now reached a point where software needs to live up to the responsibility it carries.

It took over 100 years of epic legislative battles to arrive at a place where we can eat a Twinkie or take NyQuil and trust we won’t be poisoned. Software, while complex, is no more complex or woven into our lives than food and drugs. Dragging the automotive industry into embracing seat belts and airbags took decades. Putting guardrails around society-level innovations (that customers don’t understand well enough to create healthy market forces) is really hard. But it’s something we have tackled before. It can be done. The trick will be to do so while minimizing the inevitable drag on innovation and the tendency for regulations to favor larger market incumbents.

How this all plays out is hard to predict. We’re just at the beginning of a long journey. What we can say is that the more companies take software product security seriously (promoting security awareness among product teams, shifting security left in their software development lifecycle, incorporating rigorous AppSec and DevSecOps practices, carefully evaluating their software supply chain and the software components they use, and completing security certifications such as SOC 2), the better poised they will be for the secure-by-design future that is coming. Regulatory pressure on the horizon aside, delivering more secure software is simply the right thing to do for your customers.

ChatGPT Is the Fastest Online Application In History to Reach One Million Users

AI has come a long way in the last 18 months, with tools from companies like OpenAI (the creator of ChatGPT) and Stability AI producing writing, images, and videos that are human-like. ChatGPT is the fastest online application in history to reach 1 million users. It took Netflix 3.5 years to reach 1 million users and Facebook only 10 months. It took ChatGPT just 5 days to hit one million users, and it reached over 100 million users by early 2023.

ChatGPT is part of a broader category of Generative AI which, like other forms of AI, learns from previous data. Generative AI produces entirely new material as it learns from the text, images, videos, and audio provided during training. With an uncanny ability to generate human-like text, images, and videos, Generative AI is poised to change software experiences for everyone. Compared to earlier AI, Generative AI shows dramatically more promise, potential value creation, and impact. In a recent interview, Microsoft co-founder Bill Gates said, “AI like ChatGPT will change our world and make it more efficient. It has meaningful opportunities to improve outcomes and efficiency in the office, in health care, and in education.”

Big tech is all in, with Microsoft investing an unprecedented $10 billion in OpenAI, Google launching Bard, and Amazon recently launching Bedrock. Microsoft has integrated OpenAI’s models into its products, from its Bing search engine to Microsoft Office, and from GitHub Copilot to the recently announced Security Copilot.

In the investing world, startups are raising money on the latest Generative AI use cases. Per PitchBook, $1.7 billion was raised across 46 deals in Q1 2023, compared to $1.37 billion across 78 deals in all of 2022. The new wave of AI appears to have staying power.

The Next Nigerian Prince

Also showing staying power is one of the internet’s oldest running frauds: unsolicited emails from someone claiming to be a foreign dignitary or executive promising a share of a fortune if you only provide some credentials or send money. The first “Nigerian Prince” scam occurred 100 years ago, when Prince Bil Morrison wanted some American pen pals. His touching letter was published for free in numerous newspapers around the country. Soon the letters turned to requests for small amounts of money in exchange for “useless” baubles such as ivory tusks and precious stones. The prince actually turned out to be a 14-year-old boy in America. The authorities were never able to prosecute him because of his age, but he started a technique that still exists today. This technique is called an “advance fee scam,” also known as a 419 scam, referring to the section of the Nigerian Criminal Code dealing with fraud. According to the FBI, millions are still lost to this scam, though its effectiveness has been blunted by the obviously poor grammar used in the emails. Using ChatGPT, criminals can now compose polished, convincing emails for credential phishing and send-me-money schemes.

But this goes beyond well-written emails. Bad actors can easily create believable audio and video and support it with realistic fake identities and documents. VALL-E can match the voice and mannerisms of someone based on just a three-second recording of their voice.

Using ChatGPT, people who don’t know how to code can now create malware. A cyber researcher who does not code used ChatGPT to produce malware capable of silently searching a system for specific documents and shipping them out to Google Drive. Generative AI gives the most basic hackers the sophistication to unleash malicious activity using convincing, hyper-realistic content at scale.

These are just some of the ways people currently imagine ChatGPT will be used by adversaries. Inevitably, myriad innovative exploits we can’t foresee will emerge as offensive cyber operators fully adopt AI.

What’s a Defender to Do?

Generative AI has created a race to leverage it for defense and to exploit it for attack. Traditional security measures will be inadequate to defend against these more sophisticated attacks at scale. Combating malicious Generative AI-driven attacks will require Generative AI-based solutions. Per IBM, the average time to identify and contain a data breach went from 271 days in 2016 to 277 days in 2022. Basically, we have not gotten better, and the scale and complexity are about to intensify. Generative AI also has enormous potential to transform cybersecurity. Defenders can use it to:

  • Analyze large volumes of data to identify and respond to threats faster, especially given the coming onslaught of AI-authored malware;
  • Simulate social engineering attacks to train employees against next-level sophisticated phishing;
  • Improve and speed up risk assessments and compliance reporting through better automation;
  • Provide better insights into risks and compliance issues to save teams time;
  • Simplify SIEM query writing and draft comprehensive security policies (a minimal sketch of LLM-assisted query drafting follows this list).
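
As a concrete illustration of the SIEM use case above, here is a minimal sketch (in Python) that asks a hosted large language model to draft a Splunk-style query from a plain-English request. The endpoint, model name, and prompt wording are assumptions for illustration only, not documentation of any particular vendor’s API, and the generated query should always be reviewed by an analyst before it touches production data.

    # Minimal sketch: ask a hosted LLM to draft a SIEM (Splunk-style) query from plain English.
    # Assumes an OpenAI-style chat-completions endpoint and an API key in OPENAI_API_KEY.
    import os
    import requests

    API_URL = "https://api.openai.com/v1/chat/completions"  # assumed endpoint
    API_KEY = os.environ["OPENAI_API_KEY"]

    def draft_siem_query(request_text: str) -> str:
        """Return a candidate SIEM query for an analyst to review before use."""
        payload = {
            "model": "gpt-4",  # assumed model name
            "messages": [
                {"role": "system",
                 "content": "You are a SOC assistant. Respond with a single Splunk SPL query and nothing else."},
                {"role": "user", "content": request_text},
            ],
            "temperature": 0,
        }
        resp = requests.post(API_URL, json=payload,
                             headers={"Authorization": f"Bearer {API_KEY}"}, timeout=60)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        print(draft_siem_query(
            "Show failed logins per user over the last 24 hours and flag users with more than 10 failures."))

The point is workflow acceleration, not replacement: the model drafts, and the analyst still validates the logic and field names against their own environment.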

Generative AI can go a step further and help humans automate repetitive tasks and communicate. This game-changing functionality is being applied outside of cyber, with companies like Regie.ai, Jasper, and Copy.ai automating the writing of blog posts and emails. Descript generates human-like video and audio. Tome is doing it for legal services. Naturally, cybersecurity is ripe for innovation here. Microsoft just announced Security Copilot, which combines security-specific models with OpenAI’s GPT-4 in a system that learns with usage. This will help analysts be radically more productive and ease the burden of the cybersecurity talent shortage.

But wait, there’s more. Generative AI can go beyond content creation and conduct tasks with human-like nuance. Adept AI is a Generative AI unicorn focused on automating user actions on computers. As we move to a world where white-collar tasks are automated, security operations seem ripe for achieving some level of autonomy. In fact, the market forces created by malicious Generative AI may force and accelerate such adoption. Today’s security orchestration, automation, and response (SOAR) capabilities will have to evolve to meet incomprehensibly overwhelming levels of alerts, incidents, and vulnerabilities. They will have to go beyond predefined playbooks and rules and automate security operations to the point of being nearly autonomous: executing the playbook and then learning and evolving from those actions. The result over time will be a near-autonomous level of response to most incidents.
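
To make the “execute a playbook, then learn from the outcome” idea a bit more tangible, here is a toy sketch of a playbook runner that records the results of its own actions. Every name in it is hypothetical; it is not any vendor’s SOAR API, just an illustration of the feedback loop such systems would need.

    # Toy sketch of a playbook runner with a feedback record -- all names are hypothetical,
    # meant only to illustrate "execute, then learn from the outcome," not any vendor's SOAR API.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Alert:
        source: str
        indicator: str
        severity: int

    @dataclass
    class Playbook:
        name: str
        steps: List[Callable[["Alert"], str]]                    # each step returns a short outcome string
        outcomes: Dict[str, int] = field(default_factory=dict)   # feedback: outcome -> count

        def run(self, alert: Alert) -> List[str]:
            results = [step(alert) for step in self.steps]
            for r in results:                                    # record outcomes so the playbook can be tuned
                self.outcomes[r] = self.outcomes.get(r, 0) + 1
            return results

    def enrich(alert: Alert) -> str:
        return f"enriched:{alert.indicator}"

    def contain(alert: Alert) -> str:
        return "isolated-host" if alert.severity >= 7 else "no-action"

    login_playbook = Playbook("suspicious-login", [enrich, contain])
    print(login_playbook.run(Alert("siem", "10.0.0.5", severity=8)))
    print(login_playbook.outcomes)

A real system would of course plug these steps into EDR, identity, and ticketing tools, and use the accumulated outcome data to adjust thresholds and step ordering over time.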

Playing it forward, these changes will dramatically impact the role of security operations. With less manual intervention needed to respond, teams will have the time, and the need, to collaborate more across security operations, IT, legal, and executive management. The emphasis will shift from incident detection and response to better collaboration, with AI doing the heavy lifting. One company leading the way in security collaboration is Balance Theory, a DataTribe portfolio company.

In summary, expect to continue to hear lots of hype, promise, discussion, and investment around Generative AI cyber use cases, by both defenders and attackers. Time will tell how the technology plays out. It does not seem far-fetched to imagine a video of your child’s likeness and voice, alongside a Nigerian prince, pleading for you to pay a ransom because they were both kidnapped while on holiday and couldn’t get home.

Will Web3 & PETs Lead to a New Secure-by-Design Infrastructure Framework?

In the ‘90s, the introduction of the Web brought a significant shift in system architectures, from closed, difficult-to-deploy client-server architectures to ones that support building systems available to massive audiences, where new functionality can be easily and incrementally rolled out with high frequency. Today, Web3 has started a movement to the next-generation architecture. The addition of PETs to the mix could enable this architecture to scale, and potentially to have secure-by-design characteristics.

Privacy-enhancing technologies (PETs) are a group of technologies, such as zero-knowledge proofs, secure multi-party computation, homomorphic encryption, and trusted execution environments, that have been actively researched since the 1980s but until recently were not usable in practice. Historically, the math was just too compute-intensive for real-world scenarios. Over time, however, algorithm performance has improved, and compute has continued its trek along the Moore’s Law curve.

Figure 5 - Quarterly Hiring Rates

Today, one area where there’s great interest in PETs is around helping with privacy regulation compliance. While an important use case, this isn’t a particularly revolutionary application of extremely advanced technologies that have some mind-blowing capabilities. Another area where there has been a recent explosion of PET use is in the Web3 space. The mix of PETs and blockchain technologies hints at a truly revolutionary future of secure-by-design architectures for software systems.

“Zero trust” has recently been a favorite cybersecurity buzzword. As infrastructure made the shift from secure-perimeter, self-contained data centers to the cloud, where security responsibilities are shared, the approach to security had to change to support a lower-trust environment. Today’s zero-trust solutions generally focus on handling this incrementally lower-trust situation, often assuming little to no trust at the network communication level. However, these solutions still assume some control over the machines running software, whether direct control by an organization or control by a third party that has established contractual trust with the organization.

But what if you flipped the script? Assume all communication is on public lines; data is publicly accessible and stored anywhere; and software is run on a computer operated by anyone. How do you then build a trust and privacy control layer on top of that truly zero-trust situation? Today’s public, permissionless, smart-contract-enabled blockchain networks, like Ethereum and Polygon, face this situation but also have a rudimentary solution: they achieve trusted compute and data storage through a consensus mechanism. But there’s a problem: all the data is public, and the network achieves trust by requiring every node in the network to run the code and then communicate with all other nodes to reach consensus on the proper results of that code, which is extremely inefficient and costly.

Now, let’s inject some privacy-enhancing technology. First, let’s scale the trusted compute layer. As the Web3 world feverishly works on “the scale problem,” zero-knowledge proof algorithms have become increasingly popular. Using a specific type of proof called a zk-SNARK or zk-STARK, it’s possible to form a proof that some fact is true, such as that a chunk of code ran properly. Once formed, the proof is quite small and easy to verify. This allows code to be run in a trustworthy and completely distributed fashion, enabling enormous scale with a bit of overhead. Code results and corresponding verification proofs for many chunks of code can be rolled up succinctly into a single transaction that is then recorded on a blockchain. There are dozens, if not hundreds, of projects working on different scaling solutions. Some of the leading projects using zero-knowledge proofs include Polygon’s zkEVM, StarkWare, zkSync, and Loopring. These are very much “first-generation” solutions, and it will likely take several generations to fully realize scalable, efficient, generalized compute.
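
To make the “small proof, easy to verify” property concrete, here is a toy Schnorr-style proof of knowledge using the Fiat-Shamir transform. It is not a zk-SNARK or zk-STARK, and the parameters are deliberately tiny demo values with no real security, but it shows the shape of the idea: the prover convinces a verifier that it knows a secret without revealing it, and verification is a single cheap check on a short proof.

    # Toy, non-interactive Schnorr proof of knowledge (Fiat-Shamir) -- illustrates the
    # "small proof, cheap verification" idea behind zero-knowledge systems.
    # NOT a zk-SNARK and NOT secure: the parameters below are tiny demo values.
    import hashlib

    p, q, g = 23, 11, 2          # g generates an order-q subgroup of Z_p* (demo-sized numbers)

    def challenge(*values) -> int:
        data = ",".join(str(v) for v in values).encode()
        return int(hashlib.sha256(data).hexdigest(), 16) % q

    def prove(x: int, nonce: int):
        """Prove knowledge of x with y = g^x mod p, without revealing x."""
        y = pow(g, x, p)
        t = pow(g, nonce, p)                 # commitment
        c = challenge(g, y, t)               # Fiat-Shamir challenge
        s = (nonce + c * x) % q              # response
        return y, (t, s)

    def verify(y: int, proof) -> bool:
        t, s = proof
        c = challenge(g, y, t)
        return pow(g, s, p) == (t * pow(y, c, p)) % p   # check g^s == t * y^c (mod p)

    secret = 7                               # the prover's private value
    public, proof = prove(secret, nonce=5)   # the nonce must be freshly random in a real system
    print(verify(public, proof))             # True -- the verifier learns nothing about `secret`

Production zero-knowledge systems prove far richer statements (“this program executed correctly on this input”) and bundle many such proofs together, but the verifier-side economics are the same: checking a proof is much cheaper than re-executing the work.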

OK, so you can trust the compute, but how do you protect data? Standard encryption using user-owned keys (e.g., private keys linked to network wallets) can protect data at rest and in transit, but what about while the data is being processed? By using key-sharing techniques mixed with trusted execution environments, secure multi-party computation, or potentially homomorphic encryption (one of our companies, Enveil, is a leader in this space), it’s possible to construct data management schemes that allow these chunks of code to pull in encrypted data, process it, and then store it such that the blockchain network or other infrastructure involved in running the code doesn’t have access to the data. This requires overhead, but the amount is quickly decreasing thanks to algorithm improvements. Two examples of projects that have been working on data-protecting smart contract layers are Oasis Network and Secret Network.
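
As a small illustration of the “compute on data no single party can see” idea, here is a toy additive secret-sharing sketch, one of the building blocks behind secure multi-party computation. It is not how Oasis Network, Secret Network, or Enveil implement anything; real protocols add share authentication, protection against malicious parties, and support for multiplication, none of which appears here.

    # Toy additive secret sharing -- a building block of secure multi-party computation.
    # Illustrative only: real systems add authentication, malicious-party protections,
    # and protocols for multiplication, none of which are shown here.
    import secrets

    MOD = 2**61 - 1   # arithmetic is done modulo a large prime

    def share(value: int, n_parties: int):
        """Split `value` into n random-looking shares that sum to it mod MOD."""
        shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % MOD)
        return shares

    def reconstruct(shares):
        return sum(shares) % MOD

    def add_shared(a_shares, b_shares):
        """Each party adds its own shares locally -- no party ever sees a or b in the clear."""
        return [(a + b) % MOD for a, b in zip(a_shares, b_shares)]

    salary_a = share(90_000, 3)
    salary_b = share(120_000, 3)
    total = add_shared(salary_a, salary_b)      # computed on shares only
    print(reconstruct(total))                   # 210000 -- only the agreed output is revealed

Each party holds only a uniformly random-looking share, yet together the parties can produce the agreed aggregate, which is the essence of processing data that the underlying infrastructure never sees in the clear.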

With both scalable-trusted compute and data-protected compute enabled by PETs, it’s possible to store, process, and transfer data without the system developer worrying about the location or who’s operating any of the infrastructure used to perform the tasks. After deployment, code can simply be trusted to run as written and to process data that can only be seen by users with appropriate access.

The benefits and efficiencies of not having to worry about the security, or really any details, of the underlying infrastructure can’t be overstated and will dwarf any overhead required to operate the PET-based trust layer. In addition, this overhead will decrease over time as algorithms, processing schemes, and specialized hardware are built to support this new architecture. While nothing will be a panacea that creates a completely secure environment, this technology could usher in a new framework for secure-by-design systems with guardrails for both developers and users, leading to a significant drop in the vulnerabilities and exploits that dominate headlines on an almost daily basis.

We are still in the early days, but there are many Web3 projects working toward this type of vision. Some of the leading projects are listed here. These projects currently focus only on smart contract compute scaling and basic data privacy capabilities. It will take a few more years for tools and protocols to be built that support generalized full-scale compute, access-controlled structured data in relational databases, and improved user experience around key and wallet management. But the trends are pointing in this direction, and we are excited to see how this emerging architecture shift plays out.