www.pipelinepub.com Open in urlscan Pro
205.251.188.25  Public Scan

Submitted URL: https://news.google.com/rss/articles/CBMimAFBVV95cUxPNDNmQ2dGeDlpanZVS3JUZUNFWjhHVzZVc1p5czZhUS13UFVUZnV6OExucW1TeFFzNjM...
Effective URL: https://www.pipelinepub.com/cybersecurity-assurance-2024/open-source-and-ethical-AI-standards
Submission: On November 06 via api from DE — Scanned from AU

Form analysis 0 forms found in the DOM

Text Content

 * Home
 * Subscribe
 * Knowledge Center
    * Research Center
    * News Center
    * Webinars
    * Sponsors
    * Innovation Awards

 * About Pipeline
    * Industry Advisory Board
    * Marketing Opportunities
    * Advertising Placements
    * Pipeline Memberships
    * Editorial Calendar
    * Request Media Kit
    * Contact Us
    * Subscribe

 * Innovation Awards
    * Enter Awards
    * Award Judges
    * Award Trophies

   
 * Executive Summits
    * Upcoming Summits
       * ICTXS Europe
       * ICTXS West
   
    * Past Summits
       * ICTXS East
       * ICTXS West

   November 2024, Volume 21, Issue 1

 * Past Issues
 * News Center
 * Research Center
    * View Research Center
    * Upload Assets

 * Webinars
    * View On-Demand Webinars
    * Order Webinars

 * Events
    * ICTXS Europe
    * Industry Event Partnerships

 * Sponsors
   
 * Members
    * Membership Directory
    * Become a Member
    * Members Portal


SUBSCRIBE NOW
IN THIS ISSUE
 * Cybersecurity for ISPs
 * AI to Minimize Satellite Outages
 * AI & Quantum Threat Prevention
 * SATCOM + 5G for Resilliance
 * DT Assurance for CSPs
 * Cybersecurity Certification
 * GenAI Social Attack Prevention
 * Assurance for Satellite IoT
 * Open & Ethical AI Standards
 * Automotive Data Privacy
 * Letter from the Editor
 * IT & Telecom Industry News
 * Article Index

PIPELINE RESOURCES
 * Past Issues
 * News Center
 * Research Center
 * Webinars
 * Events
 * Sponsors
 * Subscribe
 * Marketing Opportunities
 * Advertising Placements
 * Editorial Opportunities
 * Pipeline Memberships


THE ROLE OF COMMUNITY IN AI SAFETY: HOW
OPEN SOURCE PROJECTS ARE LEADING THE WAY

ORDER REPRINTS DOWNLOAD COMMENT DISCUSS SHARE
 * 1
 * 2
 * 

By: Huzaifa Sidhpurwala


Artificial intelligence (AI) is transforming industries at a rapid pace,
bringing opportunities to optimize operations, enhance customer experience, and
drive innovation. However, as AI becomes more deeply embedded in critical
processes across sectors, concerns around its safety, ethics, and fairness have
become more pronounced. Addressing these challenges requires more than
technological advancements or regulatory frameworks; it calls for a proactive
approach that places community engagement at the forefront of AI stewardship.

In this article, we explore how grassroots initiatives and open source projects
are making strides in establishing safety practices and ethical standards for
AI. These initiatives have proven to be transformative, not only because they
advance technical development, but because they also foster collaboration,
transparency, and diversity. These elements are essential for responsible AI
innovation. For IT leaders navigating this evolving landscape, understanding the
value of community engagement is key to harnessing AI’s potential in a safe,
ethical, and impactful manner.


THE NEED FOR ETHICAL AI AND SAFETY STANDARDS

As AI applications grow, concerns around unintended consequences, biased
decision making, and opaque algorithms are increasingly front and center. For
organizations deploying AI, these risks are not just ethical dilemmas, they
represent potential liabilities that can undermine trust with stakeholders and
expose enterprises to regulatory penalties.

Establishing clear ethical guidelines and safety standards can help mitigate
these risks, ensuring that AI systems are transparent, fair, and aligned with
societal values. By prioritizing ethical considerations and robust safety
protocols, organizations can foster trust, enhance accountability and create AI
solutions that benefit everyone. Ethical standards are not merely bureaucratic
hurdles, they are a cornerstone of effective AI governance. These standards can
provide companies with a competitive advantage, as customers are more likely to
trust and adopt AI solutions that are designed to be fair and unbiased.

Trust is a critical factor in the widespread adoption of AI technologies. As AI
continues to play a greater role in everything from financial decision-making to
healthcare diagnostics, the need for transparent systems has never been more
important. In many ways, open source AI projects form a key avenue for building
this trust, especially around AI safety and ethics. Open source initiatives
allow diverse stakeholders to inspect, audit and contribute to the code and
models, thus ensuring that the resulting systems are more robust, ethical and
inclusive.


OPEN SOURCE INITIATIVES LEADING THE WAY

Open source projects have become pivotal in driving responsible AI development.
While there is no shortage of open source initiatives and projects in this
field, the following two are worth mentioning:

MLCommons is an AI engineering consortium built on a philosophy of open
collaboration to improve AI systems. It is a core engineering group consisting
of individuals from both academia and industry. The consortium focuses on
accelerating machine learning through open-source projects that address critical
areas, including benchmarking, safety, and accessibility. Some of the notable
work done by their AI risk and reliability group includes the MLCommons safety
taxonomy and their safety benchmarks. Their taxonomy is currently being used by
prominent model providers like Meta and Google.

MLCommons has also made significant contributions in standardizing AI
benchmarks, which helps in assessing the performance of various AI models
against safety and reliability standards. The benchmarks set by MLCommons serve
as reference points for developers, enabling them to evaluate how well their
models align with established safety and ethical guidelines. The inclusive and
collaborative nature of MLCommons helps ensure that these benchmarks are
developed with a wide range of stakeholders, thereby making them more applicable
and reliable across different domains and industries.

The Coalition for Secure AI (CoSAI) is another notable open ecosystem of AI and
security experts from leading industry organizations. They are dedicated to
sharing best practices for secure AI deployment and collaborating on AI security
research and product development. Their AI Risk Governance workstream is working
on developing

 * 1
 * 2
 * 




FEATURED SPONSOR:

Latest Updates









✓
Thanks for sharing!
AddToAny
More…

© 2024, All information contained herein is the sole property of Pipeline
Publishing, LLC. Pipeline Publishing L.L.C. reserves all rights and privileges
regarding
the use of this information. Any unauthorized use, such as copying, modifying,
or reprinting, will be prosecuted under the fullest extent under the governing
law.