    Protecting IP and data in the AI-as-a-service era

    Businesses need to realise that the AI revolution isn't on the horizon - it's already here.
    By Next DLP | 24 August 2023

    The AI landscape, with the emergence of tools like ChatGPT, Google Bard and other large language models (LLMs), has become deeply embedded in the operational fabric of our business and personal lives.

    Its recent evolution into AI-as-a-service (AIaaS) has been a game changer. Organisations are no longer required to invest heavily in building their own AI infrastructure. Instead, with AIaaS, they can conveniently harness the might of AI to optimise operations, enhance user experiences and generate previously unimaginable insights.

    Chatbots, AI-generated content and advanced search tools are merely the tip of the iceberg.

    Understanding the nuances

    Successfully navigating the AI landscape requires a keen understanding of the nuances of these tools, their capabilities and their potential pitfalls – only then can businesses confidently and securely capitalise on AI’s immense potential.

    Each AI tool, depending on its purpose and function, comes with its own set of risks, ranging from data privacy concerns to intellectual property threats. Imagine a situation where proprietary data, once thought to be securely held, is accidentally integrated into a public-facing chatbot, or where AI-generated content unknowingly breaches copyright laws.

    These aren’t just hypothetical scenarios: they have already happened.

    The strengths and pitfalls

    In the spirit of supporting, rather than slowing down or stopping, businesses in their daily operations, we’ve compiled what we’ve found to be the most popular generative AI tools, their strengths, their pitfalls, and what businesses should consider when making the decision to use them.

    There are several categories of generative AI tools. Chatbots, for example, are used in various scenarios, from guiding website visitors to generating data-driven responses and enhancing user engagement and business intelligence for businesses in every industry.

    Next, synthetic data: AI-generated datasets are enabling businesses to circumvent the need for vast real-world data, ensuring privacy while refining algorithms. We also have AI-generated code, where AI is accelerating software development and turning mere descriptions into executable code.
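    As a toy illustration of the synthetic-data idea, the sketch below generates fake records that mimic the shape of a real dataset (its fields and plausible value ranges) without containing any real individuals. The field names and ranges here are invented for illustration, not taken from any particular product:

```python
import random

def synthesize_customers(n, seed=0):
    """Generate fake customer records that mimic the *shape* of real data
    (fields and plausible ranges) without exposing real individuals."""
    rng = random.Random(seed)  # seeded for reproducible test datasets
    first_names = ["Thabo", "Aisha", "Jan", "Nomsa", "Pieter"]
    cities = ["Johannesburg", "Cape Town", "Durban"]
    return [
        {
            "name": rng.choice(first_names),
            "city": rng.choice(cities),
            "monthly_spend": round(rng.uniform(50, 500), 2),  # rands
        }
        for _ in range(n)
    ]

records = synthesize_customers(3)
```

    Because the records are generated rather than sampled from production data, they can be shared with developers or used to refine algorithms without the privacy exposure that real customer data would carry.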

    Then we have “search”, where new AI tools offer natural language responses and are reimagining what search engines are capable of. However, many, like ChatGPT, remain “black box” models whose mechanisms aren’t always transparent. AI tools are also revolutionising content generation, be it converting audio to text or transforming descriptions into visuals.

    Understanding the AI risk spectrum

    With AI, there are several hypothetical risks and many pragmatic, real-life ones. The real-life risks include things like consumer privacy, legal exposure, bias and ethics. The hypothetical ones include machines becoming sentient and taking over the world, AI programmed for harm, or AI developing destructive behaviour.

    Either way, as AI tools become more integrated into our organisations, there is growing concern over the risks they pose to data security. For example, intellectual property risk is very real. Platforms continually learn and adapt from user inputs. This presents a risk that proprietary information becomes embedded within a system’s dataset. A case in point would be Samsung’s IP exposure incident after an employee interfaced with ChatGPT.

    Covering all the bases

    To counter this risk, we recommend that businesses recognise that AI tools can be channels for data leakage. More and more, workforces are using AI tools like ChatGPT to help with their daily tasks, often without considering the potential consequences of uploading proprietary or confidential data. One needs to thoroughly scrutinise and assess an AI tool’s encryption, data handling policies and ownership agreements.
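    One simple control along these lines is to scan prompts for sensitive material before they leave the organisation. The sketch below shows the idea with a few illustrative patterns (the pattern names and regexes are assumptions to be tuned per organisation, not a complete or vendor-specific rule set):

```python
import re

# Illustrative patterns only; a real deployment would tune these per organisation.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "internal_host": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
    "id_number": re.compile(r"\b\d{13}\b"),  # South African ID numbers are 13 digits
}

def scan_prompt(text):
    """Return labels of any sensitive patterns found in a prompt.
    An empty result means nothing obviously proprietary was detected."""
    return sorted(label for label, pat in SENSITIVE_PATTERNS.items()
                  if pat.search(text))

def safe_to_submit(text):
    """True if the text may be sent to an external AI service."""
    return not scan_prompt(text)
```

    Pattern matching of this kind only catches what it is told to look for, which is why it complements, rather than replaces, the policy review and vendor scrutiny described above.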

    IP ownership is another issue. An AI’s output is based on its training data, potentially sourced from multiple proprietary inputs. This blurred lineage raises questions about the ownership of generated outputs. In this instance, we recommend reviewing the legal terms and conditions of AI systems and even engaging legal teams during evaluations.

    All third-party generative AI tools should be carefully reviewed to understand both the legal protections and potential exposures. There are subtleties that are crucial to consider, including those that cover ownership of intellectual property and privacy matters. Check the relevant terms and conditions periodically, as these documents may be updated without notifying users.

    Fighting AI system attacks

    Entities also need to remember that AI tools aren’t immune to hacking. Bad actors can manipulate these systems to alter their behaviour in pursuit of a malicious objective. For instance, techniques such as indirect prompt injection can manipulate chatbots, exposing users to risks.

    As AI systems are increasingly integrated into critical components of our lives, these attacks represent a clear and present danger, with the potential to have catastrophic effects on the security not only of companies, but nations, too.

    To protect against attacks of this nature, we recommend having AI usage policies, much in the same way companies today set and review social media policies. Also, establish reporting mechanisms for irregular outputs, and prepare for potential system attacks.
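    A lightweight example of such a reporting mechanism is screening untrusted third-party content (a web page, e-mail or PDF) for instruction-like phrases before it is fed into a chatbot. The phrases below are crude heuristics chosen for illustration; real defences against indirect prompt injection layer model-side mitigations and policy controls on top:

```python
import re

# Crude heuristic phrases associated with prompt-injection attempts.
INJECTION_HINTS = [
    re.compile(r"ignore (all|any|previous|prior) instructions", re.I),
    re.compile(r"disregard (the|your) (system|previous) prompt", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
]

def flag_untrusted_content(doc):
    """Flag retrieved third-party text that contains instruction-like
    phrases before it is concatenated into a chatbot prompt."""
    return any(p.search(doc) for p in INJECTION_HINTS)
```

    Flagged content can then be quarantined for review rather than passed to the model, feeding the irregular-output reporting process recommended above.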

    The drive to implement AI security solutions that are able to respond to rapidly changing threats makes the need to secure AI itself even more urgent. The algorithms that we rely on to detect and respond to attacks must themselves be protected from abuse and compromise.

    Keeping up with regulations

    Because data input into AI systems might be stored, it could well fall under privacy regulations such as Popia, GDPR or CCPA. Moreover, AI integrations with platforms like Facebook can further complicate data privacy landscapes.

    This is why it is key to ensure data encryption and compliance with global data protection regulations. Entities need to thoroughly understand AI providers’ data storage, anonymisation, and encryption policies. Furthermore, because AI is such a rapidly evolving and complex field, security teams must stay abreast of all developments in this sphere. Understanding the challenges is the first step in protecting your organisation.
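    Where data must be submitted at all, one pragmatic step toward Popia/GDPR-style compliance is to pseudonymise direct identifiers first. The sketch below replaces e-mail addresses and phone numbers with placeholder tokens; the regexes are simplified assumptions (the phone pattern covers one common South African format) rather than exhaustive PII detection:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b0\d{2}[ -]?\d{3}[ -]?\d{4}\b")  # e.g. 011 555 1234

def pseudonymise(text):
    """Replace direct identifiers with placeholder tokens so the redacted
    text, not the personal data, is what reaches the AI provider."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```

    Redaction at the boundary means the AI provider's storage and retention practices matter less for the data that was never sent.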

    Using AI services requires as much diligence as using any other online platform. This includes understanding licence agreements, using robust passwords and promoting user awareness. This is why cyber hygiene training needs to be prioritised, multi-factor authentication set up and stringent password policies enforced.

    AI-as-a-service era

    Historically, businesses may have been complacent about data submissions due to a lack of awareness, limited regulatory consequences and the absence of high-profile data breaches. However, with the advent of AIaaS, data is being used more and more to train models, which amplifies the risks. As AIaaS becomes ubiquitous, safeguarding sensitive data is paramount to maintaining trust, ensuring regulatory compliance, and preventing potential misuse or exposure of proprietary information.

    All businesses should consider deploying data loss prevention tools to monitor and control data submissions to AI services. These can recognise and classify sensitive data, preventing inadvertent exposures.

    Businesses need to realise that the AI revolution isn’t on the horizon — it’s already here. As AI becomes more entrenched in our operational processes, we need to harness its power, yet navigate its risks judiciously. By understanding potential dangers and adopting holistic protection strategies, organisations can strike a balance between innovation and security.

    About Next
    Next DLP (“Next”) is a leading provider of insider risk and data protection solutions. The Reveal Platform by Next uncovers risk, stops data loss, educates employees and fulfils security, compliance and regulatory needs. The company’s leadership brings decades of cyber and technology experience from Fortra (previously HelpSystems), DigitalGuardian, Crowdstrike, Forcepoint, Mimecast, IBM, Cisco and Veracode. Next is trusted by organisations big and small, from the Fortune 100 to fast-growing healthcare and technology companies. For more, visit nextdlp.com, or connect on LinkedIn or YouTube.

    • Read more articles by Next DLP on TechCentral
    • This promoted content was paid for by the party concerned


    © 2009 - 2025 NewsCentral Media