AiThority Interview with Yoav Regev, CEO and co-founder at Sentra

By Editorial Team | July 1, 2025 | 8 Mins Read


Yoav Regev, CEO and co-founder at Sentra, shares his thoughts on the security protocols that data teams should focus on more as AI becomes mainstream to this critical business function, in this AiThority interview:

____________

Hi Yoav, tell us about your role at Sentra and your journey through the tech ecosystem.

I'm currently CEO and co-founder at Sentra. My journey to founding Sentra was shaped by decades of experience securing sensitive data in complex environments. I served as head of the cyber department in Unit 8200, the elite Israeli Military Intelligence unit, for nearly 25 years before transitioning into entrepreneurship.

Throughout my tenure at Unit 8200, it was clear that sensitive data had become the most valuable asset to organizations and adversaries alike. I noticed that in the private sector, the enterprises that were leveraging data securely were generating new insights, developing new products, providing better experiences, and separating themselves from the competition. On the other side, as data became more valuable, it also became a bigger target for threat actors. As the volume and the impact of sensitive data grew, so did the importance of finding the most effective way to secure it.

After finishing my service at Unit 8200, I joined Sentra's co-founders, Asaf Kochan, Ron Reiter, and Yair Cohen, to create a data security company for the cloud and AI era. We see securing data as the major problem facing most organizations in the world, and it is the core pillar driving their business.

We'd love to hear the highlights from your recent funding round. What can customers expect in terms of product enhancements in the near future?

We recently closed a $50 million Series B, bringing Sentra's total funding to over $100 million. The round was led by Key1 Capital with participation from our existing investors, Bessemer Venture Partners, Zeev Ventures, Standard Investments, and Munich Re Ventures. Leading up to the funding, Sentra experienced a more than 300% year-over-year increase in revenue and added several new Fortune 500 customers.

Building on that momentum, we launched our Data Security for AI Agents solution. Designed to address the growing challenges associated with AI assistants, our approach ensures that organizations can embrace AI innovation securely and responsibly. Key capabilities include automatic discovery of AI agents and their connected knowledge bases along with classification of the sensitive data within them, real-time monitoring for unauthorized data access, and detailed visibility into AI-generated responses to prevent data leaks and ensure compliance.
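
To illustrate the kind of real-time monitoring described above, here is a minimal, hypothetical sketch in Python. The agent names, knowledge-base IDs, and event format are invented for illustration only; they are not Sentra's data model or API.

```python
from datetime import datetime, timezone

# Hypothetical monitor: the agent names, knowledge-base IDs, and event format
# below are invented for illustration; they are not Sentra's data model or API.
AGENT_KNOWLEDGE_BASES = {
    "support-copilot": {"kb-support-articles"},
    "hr-assistant": {"kb-hr-policies"},
}

def audit_access(events):
    """Yield every event where an agent read a data source it is not connected to."""
    for event in events:
        allowed = AGENT_KNOWLEDGE_BASES.get(event["agent"], set())
        if event["source"] not in allowed:
            yield event

events = [
    {"agent": "support-copilot", "source": "kb-support-articles",
     "time": datetime.now(timezone.utc)},
    {"agent": "support-copilot", "source": "finance-payroll-db",
     "time": datetime.now(timezone.utc)},
]
for alert in audit_access(events):
    print(f"ALERT: {alert['agent']} read {alert['source']} at {alert['time']:%Y-%m-%d %H:%M} UTC")
```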

As data's journey evolves, so does Sentra's product roadmap. Customers can expect that we will continue to evolve our portfolio to reflect the needs of data security, privacy, and governance teams, while also doubling down on the core capabilities that set Sentra apart: best-in-class data discovery, highly accurate classification, and broad coverage across all environments.

Also Read: AiThority Interview with Pete Foley, CEO of ModelOp

What should organizations be doing more of to ensure better data hygiene and data cleaning processes at a time when so many admit to having bad data?

The first step to fixing your data security posture is recognizing that data is your most valuable asset, and that it is continually moving around your clouds and wider ecosystem, so a new, more agile and scalable approach is required. Once they accept that, organizations can focus on a few key steps:

  1. Get full visibility into all of their sensitive data. Before any meaningful data security work can begin, organizations must have a clear, real-time view of where sensitive data lives across their cloud, SaaS, and on-prem environments. Without this visibility, it is impossible to assess risk, apply proper controls, or meet compliance requirements. Discovery must be continuous, not a one-time effort (a minimal sketch of such a scan follows this list).
  2. Automate security tasks. Even with the proliferation of AI, some organizations are hesitant to adopt the technology for their security stack. I recommend that security teams overcome this concern and use AI and other automation tools to eliminate repetitive, resource-intensive tasks such as data discovery and classification.
  3. Uplevel sensitive data protection. Ensure proper data security posture by identifying sensitive data no matter where it resides. Put controls in place so that sensitive data is only accessible to authorized personnel. Continuously monitor data access for unusual activity. Automate the creation of support tickets for security incidents, initiate automated remediations via integrations with security stack controls, and prioritize high-risk alerts.
  4. Implement risk-based permissioning. There have to be clear procedures for managing authentication credentials. Apply actions based on risk level, for example, immediate access revocation for low-risk cases and verification for critical credentials.
  5. Have concrete data mapping strategies in place. With well-defined data mapping strategies, organizations can ensure data is stored in the appropriate places and complies with regulations.
  6. Assign accountability. Encourage your employees, regardless of role, to take personal responsibility for data security.
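
As referenced in step 1, here is a minimal sketch of what a discovery-and-classification pass could look like, assuming a simple regex-based classifier. A real DSPM platform would use far richer detection and crawl cloud, SaaS, and on-prem stores continuously; everything here is illustrative.

```python
import re
from dataclasses import dataclass

# Hypothetical detectors; a real DSPM platform would use far richer classifiers.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

@dataclass
class Finding:
    asset: str      # e.g. an object-store key or SaaS document ID
    category: str   # which sensitive-data class matched
    matches: int    # how many hits were found

def classify_asset(asset: str, content: str) -> list[Finding]:
    """Scan one data asset and report the sensitive-data classes it contains."""
    return [
        Finding(asset, category, len(pattern.findall(content)))
        for category, pattern in PATTERNS.items()
        if pattern.search(content)
    ]

# Toy inventory standing in for a continuous discovery crawl.
inventory = {"exports/customers.csv": "jane@example.com, 123-45-6789"}
for asset, content in inventory.items():
    for finding in classify_asset(asset, content):
        print(f"{finding.asset}: {finding.category} x{finding.matches}")
```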

Fortunately, data security posture management (DSPM) solutions can automate and handle all of these steps for organizations, reducing the burden on security teams.

What added practices should data and marketing/ops teams be focusing on when they use AI to enable different types of workflows?

Before data and marketing teams incorporate any AI into their workflows, organizations must sit down and outline a proactive security approach for AI going forward. With this, companies can make sure that AI enhances, rather than compromises, security. To do this, organizations should:

  • Create strict guidelines for data sharing and data hygiene within AI platforms
  • Share clear AI usage policies based on zero trust and least privilege principles (see the sketch after this list)
  • Ensure they control which data gets into AI systems and models
  • Integrate AI security into company-wide cybersecurity training to educate employees on the latest AI threats
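
As a small illustration of the zero-trust and least-privilege policies mentioned in the second bullet, the sketch below denies by default and only lets a workflow touch data categories it has been explicitly granted. The workflow names and categories are hypothetical.

```python
# Hypothetical least-privilege policy: which data categories each AI workflow
# may use. Workflow names and categories are invented for illustration.
ALLOWED_CATEGORIES = {
    "marketing-copy-assistant": {"public", "approved-marketing"},
    "sales-forecast-agent": {"public", "aggregated-metrics"},
}

def may_share(workflow: str, data_category: str) -> bool:
    """Deny by default: a workflow only sees categories explicitly granted to it."""
    return data_category in ALLOWED_CATEGORIES.get(workflow, set())

assert may_share("marketing-copy-assistant", "approved-marketing")
assert not may_share("marketing-copy-assistant", "customer-pii")  # never granted, so denied
assert not may_share("unknown-agent", "public")                   # unknown workflows get nothing
```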

Also Read: AiThority Interview with Dr. William Bain, CEO and Founder of ScaleOut Software

What other security protocols should data teams be mindful of as AI becomes more mainstream to their processes?

No model is completely immune to privacy and security risks in real-world scenarios, so leveraging automated solutions for ongoing monitoring is key to maintaining AI security. It is important to have security embedded into AI applications from the start. Doing so sets developers and security teams up for success long before an AI application goes to market. Key steps include identifying where sensitive data resides and ensuring good security posture, removing or de-identifying sensitive data from training sets, testing models for adherence to privacy regulations during pre-production, and implementing continuous monitoring throughout the development lifecycle.
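
A minimal sketch of the de-identification step mentioned above might look like the following. The regex patterns are illustrative only; a production pipeline would rely on purpose-built PII detection (NER models, format-preserving tokenization) rather than a handful of patterns.

```python
import re

# Hypothetical redaction pass over training records: strip obvious PII before
# anything reaches a model. Patterns here are illustrative, not exhaustive.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\+?\d[\d -]{8,}\d"), "[PHONE]"),
]

def deidentify(record: str) -> str:
    for pattern, token in REDACTIONS:
        record = pattern.sub(token, record)
    return record

training_rows = ["Contact jane@example.com or +1 415 555 0100 about the renewal."]
print([deidentify(row) for row in training_rows])
# ['Contact [EMAIL] or [PHONE] about the renewal.']
```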

Five thoughts on the future of AI before we wrap up?

  1. AI regulations are coming. Colorado set the standard last year with the first comprehensive AI legislation focused on consumer protections and safety. In 2025 alone, there were at least 550 AI bills introduced in 45 states and Puerto Rico. Just as we saw with GDPR, HIPAA, and CCPA in the data realm, we are going to see organizations having to navigate AI governance as lawmakers work to create policy to keep the technology safe.
  2. AI is going to increase instances of shadow and duplicate data. As AI adoption continues, data will proliferate faster than we have seen with the cloud, leaving shadow data in its midst. Shadow data is any data that exists outside of a secure data management framework. Because it often exists without the knowledge of, or proper management by, the security team, it is considered a top target for threat actors. Organizations need to use security controls that stay with the data, no matter where it goes.
  3. Least privilege access will move from a nice-to-have to a necessity for AI systems. Security teams have to apply the principle of least privilege to AI systems. This looks like only giving AI models access to the data they need and no more, ultimately minimizing the risk of misuse, data leakage, and breaches.
  4. Protecting the integrity and privacy of data in large language models (LLMs) will become essential. Organizations need to have responsible and ethical AI applications, and the only way to do that is to hold a steadfast commitment to integrity and privacy. By implementing some of the best practices I mentioned above, organizations can mitigate the risks associated with data leakage and unauthorized access.
  5. We are only beginning to understand AI's potential, and its downfalls. Agentic AI is coming, and its autonomy is capable of transforming business-critical operations, increasing productivity, and reducing costs. However, its autonomy also introduces significant security risks. It will require collaboration across the security ecosystem to keep AI threats at bay.

[To share your insights with us, please write to psen@itechseries.com]


