
Blog

Survey Reveals Consumer Sentiment on AI-Created Apps

Get details on our survey of 1,000 consumers that gauges their knowledge of and concerns about AI in app development.

AI is transforming how software gets built. A recent CodeSignal survey found that 81% of developers surveyed report using AI-powered coding assistants; among those users, 49% use them daily and 39% weekly.

In a recent Legit survey of 400 security professionals and developers, 96% report that their companies use GenAI-based solutions for building or delivering applications, and nearly 90% of the developers surveyed report using AI coding assistants.

The software itself is also different thanks to this shift – AI is now part of the application architecture. Nearly every software product today has some AI component fetching data or communicating with other tools.

Those in the tech industry are well aware of this shift, and most are aware of the risks involved. But what about those outside the tech world? Are they aware? Concerned? They are most definitely purchasing and interacting with applications developed with AI. Do they know this? Should they? 

In addition, the bar to developing apps has certainly been lowered with AI, which should give consumers pause. Does it? 

In honor of Cybersecurity Awareness Month, we surveyed 1,000 consumers to find out. Here are the highlights and takeaways … 

Less than a quarter believe the majority of code is AI-generated

Only 22% of respondents believe a typical mobile app’s code is mostly AI-generated.

 


78% of respondents believe at least half of code is written directly by developers. Not only do most consumers overestimate how much code is still human-written, but 29% say they don’t even fully understand how apps are built in the first place. 

Should they know? Some of the other responses indicate yes … 

Consumers split on whether AI affects their trust in an app

One-quarter of respondents noted they would lose trust if they learned their favorite app uses AI-written code. 51% stated it would have no effect on their trust. Interestingly, 34% of Gen Z respondents noted that it would increase their trust. 

While most consumers won’t outright avoid AI-coded apps, nearly half (47%) say they are concerned about AI in apps. 

 


Vendor transparency about whether and how an app uses AI, or was developed with it, would go a long way here.

What raises red flags for app consumers? Vibe coding

Vendors, take note -- 31% of respondents say sensitive data requests top their list of app concerns.

What signals app security for consumers? 

53% say appearance in official app stores, 46% say privacy policies, and 45% say well-known brands.

What is the breakdown by generation? A privacy policy (63%) is most influential for Boomers, an official app store listing is most influential for Gen X (53%) and Millennials (52%), and a well-known brand (50%) is most influential for Gen Z.

Of note -- professional design would make 42% of Millennials and Gen Z believe an app is secure, compared to only 16% of Boomers.

 


How would consumers respond to an AI-related vulnerability in an app? 

If they learned that AI-generated code caused a vulnerability in an app they used, 26% of consumers would try to avoid all apps with AI-generated code, and 33% would be more cautious.

How does the risk tolerance for AI-generated apps differ among generations? 

Overall, younger consumers show higher risk tolerance for AI apps. Boomers are more likely to worry that “AI might introduce security vulnerabilities” (41%) vs. 28% of Millennials. After an AI-related vulnerability, 35% of Boomers would avoid AI apps entirely, compared to just 23% of Gen Z. Boomers are nearly 2x more likely to lose trust if they find out AI was used to develop their favorite app. 

Who do consumers think is responsible for protecting personal data in an app? 

Consumers see themselves (34%) and app developers (37%) as nearly equally responsible for protecting personal data in an app. Boomers (55%) and Gen X (39%) see themselves as most responsible, while Millennials (44%) and Gen Z (44%) see developers as most responsible. 

Takeaways for consumers and vendors 

Ultimately, AI will bring robust new software capabilities, but it also expands the attack surface. Most consumers are unaware of how much AI is in use in modern apps. Although not opposed to AI use in apps, most do have concerns, and apps sharing too much data, or sharing sensitive data, is the top app-related concern across generations.

Based on the responses, vendor transparency around AI use and security practices should be a priority. The companies creating the software bear the primary security responsibility, but consumers should be aware of the heightened risks AI introduces and take steps to protect themselves. Consumers should double down on the basic online safety guidance that was relevant before AI and is even more relevant today.

Vendor takeaways 

Software vendors should be transparent about where and how they are using AI, and offer opt-in choices when sensitive data is involved. They also need to adjust their security practices to this new attack surface and be upfront about the risks and the steps they take to mitigate them.

Legit recommends: 

  • Discovery: AI visibility is now a key part of AppSec. The ability to identify AI-generated code, and to see where and how AI is in use across your software development environment, has become critical.
     
  • Security testing: AI-specific security testing has become vital. AI introduces novel vulnerabilities and weaknesses that traditional scanners can’t find, such as training data poisoning, excessive agency, and others detailed in OWASP’s LLM & Gen AI Top 10.
     
  • Threat modeling: As the risks to the organization change, so too must threat models. If your app now exposes AI interfaces, runs an agent, or takes input from users and uses a model to process it, you’ve got new risks.
     
  • Awareness of toxic combinations: The use of AI in code development is not necessarily a risk on its own. But when it is combined with another risk, like a lack of static analysis or branch protection, the risk level rises. Finding these “toxic combinations” requires both discovering which development pipelines are using GenAI to create code and ensuring those pipelines have the appropriate security measures and guardrails in place (a minimal sketch of such a check follows this list).
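
To make that last point concrete, here is a minimal sketch of a toxic-combination check, assuming you already have a list of repositories flagged as using GenAI coding assistants and can call the public GitHub REST API. The repository names, the GITHUB_TOKEN environment variable, and the SAST_HINTS keywords are illustrative assumptions for this example, not survey findings or part of any specific product.

```python
"""Sketch: flag repos that use GenAI-assisted code but lack basic guardrails."""
import os
import requests

GITHUB_API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",  # assumed to be set by the caller
    "Accept": "application/vnd.github+json",
}

# Hypothetical input: repos your discovery process flagged as using GenAI assistants.
AI_ASSISTED_REPOS = ["my-org/payments-api", "my-org/mobile-app"]

# Workflow filename hints that suggest a SAST step exists (heuristic only).
SAST_HINTS = ("codeql", "semgrep", "sast")


def default_branch(repo: str) -> str:
    """Return the repository's default branch name."""
    resp = requests.get(f"{GITHUB_API}/repos/{repo}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["default_branch"]


def has_branch_protection(repo: str, branch: str) -> bool:
    """True if the branch has a protection rule; the API returns 404 when it does not."""
    resp = requests.get(
        f"{GITHUB_API}/repos/{repo}/branches/{branch}/protection", headers=HEADERS
    )
    return resp.status_code == 200


def has_sast_workflow(repo: str) -> bool:
    """Heuristic: look for a SAST-related file under .github/workflows."""
    resp = requests.get(
        f"{GITHUB_API}/repos/{repo}/contents/.github/workflows", headers=HEADERS
    )
    if resp.status_code != 200:
        return False
    names = [item["name"].lower() for item in resp.json()]
    return any(hint in name for name in names for hint in SAST_HINTS)


if __name__ == "__main__":
    for repo in AI_ASSISTED_REPOS:
        branch = default_branch(repo)
        protected = has_branch_protection(repo, branch)
        sast = has_sast_workflow(repo)
        if not (protected and sast):
            print(
                f"Toxic combination in {repo}: GenAI-assisted code with "
                f"branch protection={protected}, SAST workflow={sast}"
            )
```

In practice an AppSec or ASPM platform would correlate these signals automatically, but the logic is the same: find where GenAI is writing code, then confirm the guardrails around it.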

Consumer takeaways 

It’s important to note that modern AI-first apps may be collecting more sensitive information (voice, images, location, behavioral data) to train models or personalize experiences. 

Here are a few standard best practices that are worth re-stating and reinforcing in this AI era:  

  • Update: Keep your phones and computers on the most recent operating system versions with the latest security updates.
     
  • Unique passwords: Reusing the same username and password combination across apps increases the chance of exposing your data. Use a unique password for each app.
     
  • Enable two-factor authentication: Whenever possible, enable two-factor authentication to create an extra layer of protection.
     
  • Limit app permissions: Especially relevant in an era of AI-led software development. Always check what permissions an app has and limit them to only what’s necessary.
     
  • Delete unused apps: Apps you no longer use are unnecessarily exposing you to risk; remove them.
     
  • Beware of suspicious emails and links: Don’t engage with emails from unknown senders, or click links that come from untrusted sources or have unusual formatting or characters.

Learn more

Get more details on how AI is affecting application security in our new guide, AppSec in the Age of AI. 

Get a stronger AppSec foundation you can trust and prove it’s doing the job right.

Request a Demo