www.informationweek.com


THE CHATBOT WILL SEE YOU NOW: 4 ETHICAL CONCERNS OF AI IN HEALTH CARE

Chatbots and other AI have the potential to reshape health care, but with the
explosion of new tools come questions about ethical use and potential patient
harm.

Carrie Pallardy

September 28, 2023

14 Min Read
Den Vitruk via Alamy Stock



AT A GLANCE

 * Lack of regulations overseeing how AI is developed and used in the US health
   care system.
 * Differences between augmented intelligence and artificial intelligence.
 * Ethical pitfalls of AI: Bias, hallucinations and harmful info, privacy and
   consent, transparency.

Artificial intelligence has been quietly working in the background in health
care for years. The recent explosion of AI tools has fueled mainstream
conversations about their exciting potential to reshape the way medicine is
practiced and patient care is delivered. AI could make strides toward achieving
the long vaunted goal of precision medicine (personalized care rather than the
standardized one-size-fits-all approach more commonly found in health care). It
could reduce the administrative burden on clinicians, allowing them to step away
from the screen and bring more human interaction back to the bedside. It could
lead to more accurate diagnoses for more patients, faster than the human
workforce alone could ever hope to achieve.





These possibilities are dizzying in their number and potential, but they are not
without the shadow of possible harm. The kind of radical change AI promises to
bring to health care is messy, and patient lives hang in the balance.  



While a chatbot isn’t likely to be your new doctor anytime soon, doctors and
health systems increasingly are finding ways to integrate AI into care delivery.
In 2022, the American Medical Association (AMA), a professional association and
lobbying group for physicians, conducted a survey on digital health care. Of the
1,300 physicians who responded to the survey, 18% reported using augmented
intelligence (a distinction we’ll address) for practice efficiencies and 16%
reported using it for clinical applications. Within a year, 39% plan to adopt AI
for practice efficiencies and 36% plan to adopt it for clinical applications.



The adoption of AI in health care is happening now, while the technology is
still nascent. There are plenty of voices calling for an implementation
framework, and many health care organizations have published statements and
guidelines. But there are yet to be any cohesive principles or regulations
overseeing how AI is being developed and put into use in the US health care
system.



Will ethics be left behind in the race to integrate AI tools into the health
care industry?


AUGMENTED INTELLIGENCE VERSUS ARTIFICIAL INTELLIGENCE

When you see the term “AI,” you likely assume it stands for artificial
intelligence. Many voices in the health care space argue that this technology’s
applications in their field earn it the title of “augmented intelligence”
instead.



The AMA opts for the term augmented intelligence, and so does the World Medical
Association (WMA), an international physician association.



“We chose the term augmented intelligence because of our deep belief in the
primacy of the patient-physician relationship and our conviction that artificial
intelligence designs can only enhance human intelligence and not replace it,”
Osahon Enabulele, MB, WMA president, tells InformationWeek via email.

Whether considered augmented or artificial, AI is already reshaping health care.


THE ARGUMENT FOR AI IN HEALTH CARE

Last year, Lori Bruce, associate director of the Interdisciplinary Center for
Bioethics at Yale University and chair of the Community Bioethics Forum at Yale
School of Medicine, had a carcinoma. She faced all the uncertainty that comes
with a cancer diagnosis and the different treatment possibilities. AI could
dramatically reduce the time it takes to make a treatment decision in that kind
of case.



“AI isn’t a magic bullet, but for someone like me, I could someday ask it to
read all the medical literature, then I could ask it questions to see where it
might have erred -- then still make the decision myself. AI could someday give
narrower ranges for good outcomes,” Bruce tells InformationWeek via email.





And that is just one of the potential ways AI could have the power to do good in
health care. Big players in the space are working to find those promising
applications. In August, Duke Health and Microsoft announced a five-year
partnership that focuses on the different ways AI could reshape medicine.

The partnership between the health system and the technology company has three
main components. The first is the creation of a Duke Health AI Innovation Lab
and Center of Excellence, the second is the creation of a cloud-first workforce,
and the third is exploring the promise of large language models (LLMs) and
generative AI in health care.



The health system and technology company plan to take a stepwise approach to AI
applications, according to Jeffrey Ferranti, MD, senior vice president and chief
digital officer of Duke Health. First, they will explore ways that AI can help
with administrative tasks. Next, they will examine its potential for content
summarization and then content creation. The final level will be patient
interaction. How could intelligent chat features engage patients?



This partnership emphasizes ethical and responsible use. The plan is to study
what is working and what isn’t and publish the results.



“The technology is out there. It's not going away. [It] can’t be put back in the
bottle, and so, all we can do is try to use it in a way that’s responsible and
thoughtful, and that’s what we’re trying to do,” says Ferranti.



Administrative support and content summarization may seem like low-hanging
fruit, but the potential rewards are worth reaping. Physician burnout reached
alarming levels during the early years of the COVID-19 pandemic. In 2021, the
AMA reported that 63% of physicians had symptoms of burnout. While burnout is a
complex issue to tackle, administrative burden routinely emerges as a driving
factor. What if AI could ease that burden?



If AI can do more of the administrative work, doctors can get back to being
doctors. If AI can read all of the latest medical research and give doctors the
highlights, they can more easily keep up with the developments in their fields.
If AI can help doctors make faster, more accurate clinical decisions, patient
care will benefit. Patient care could get even better if AI reaches the point
where it can offer accurate diagnoses and treatment planning faster than humans
can.





All of those potential “ifs” paint a bright future for medicine. But this ideal
future cannot be reached without acknowledging and effectively addressing the
ethical pitfalls of AI. Here are four to consider:


1. BIAS

An AI system is only as good as the data it is fed, and biased data can lead to
poor patient outcomes.



“We have seen some spectacular publicly demonstrated reported failures where
algorithms have actually worsened care for patients, reintroduced bias, made
things more difficult for patients of color, in particular,” says Jesse
Ehrenfeld, MD, MPH, AMA president.



A study published in the journal Science in 2019 found racial bias in an
algorithm used to identify patients with complex health needs. The algorithm
used health costs to determine health needs, but that reasoning was flawed.
“Less money is spent on Black patients who have the same level of need, and the
algorithm thus falsely concludes that Black patients are healthier than equally
sick White patients,” according to the study.



The study authors estimated that the racial bias in the algorithm cut the number
of Black patients identified for additional care by more than half.
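The mechanism the study describes can be reproduced with a toy simulation (all numbers here are hypothetical, not from the study): two groups have identical health needs, but less money is historically spent on one of them, so an algorithm that ranks patients by cost systematically under-enrolls that group.

```python
import random

random.seed(0)

# Hypothetical synthetic population: two groups with IDENTICAL health needs,
# but historically less money is spent on group B at the same level of need.
patients = []
for _ in range(1000):
    need = random.uniform(0, 10)  # true (unobserved) health need
    patients.append({"group": "A", "need": need, "cost": need * 100})
    patients.append({"group": "B", "need": need, "cost": need * 60})  # under-spending

# A cost-as-proxy algorithm enrolls the top 10% of patients by historical spending.
by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)
enrolled = by_cost[: len(patients) // 10]

share_b = sum(p["group"] == "B" for p in enrolled) / len(enrolled)
print(f"Group B share of enrollment: {share_b:.0%}")  # far below the fair 50%
```

Even though both groups were built with the same distribution of need, ranking on the cost proxy leaves group B badly underrepresented among enrolled patients.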



Scientific American reported that the study authors reached out to the company
behind the algorithm and began working together to address the racial bias.



This kind of bias is not isolated, nor is it limited to racism. It is a major
ethical issue in the field of AI that could deepen racism, sexism, and economic
disparities in health care. If bias continues to go unchecked in systems like
this, ones that impact the health care treatment decisions of millions of
people, how many patients will be overlooked for the care they need?





Ethical use starts with identifying bias in data before it powers a system
impacting people’s health.



“There need to be standards for understanding and demonstrating that the data on
which AI is trained is applicable to the intended intervention and that there
should be … requirements for testing in controlled environments prior to
approval for implementation in health care settings,” says Brendan Parent, JD,
assistant professor in the department of surgery and the department of
population health at NYU Langone Health, an academic medical center.



But it is important to remain vigilant as AI systems are put into practice.
Parent also emphasizes the importance of continuous monitoring and testing. AI
models will change as they ingest new data.


2. HALLUCINATIONS AND HARMFUL INFORMATION

With apologies to science fiction author Philip K. Dick, it turns out that
androids, or at least their early forerunners, might dream of electric sheep. AI
can hallucinate, an eerily human term for instances in which an AI system simply
makes up information. An AI system getting creative in a health care context is
an obvious problem: made-up or false information, particularly when used in
clinical decision-making, can cause patient harm.



“We have to understand when and where and why these models are acting a certain
way and do some prompt engineering so that the models err on the side of fact,
[not] err on the side of creativity,” says Ferranti.
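One way to make a system err on the side of fact is a grounding guardrail: the chatbot only surfaces answers it can match against vetted content and escalates everything else to a human. The sketch below is illustrative, not any vendor's actual design; the vetted set and function names are invented for the example, and the exact-match check stands in for the semantic-similarity check a real system would use.

```python
# Minimal guardrail sketch (hypothetical names throughout): before a model's
# answer reaches a patient, require that it be grounded in a vetted reference
# set; anything unsupported is escalated to a human clinician instead.

VETTED_FACTS = {
    "ibuprofen can cause stomach irritation",
    "acetaminophen overdose can damage the liver",
}

def grounded(answer):
    """Return True only if the answer matches a vetted reference statement."""
    return answer.strip().lower() in VETTED_FACTS

def respond(model_answer):
    """Pass through grounded answers; escalate anything unverifiable."""
    if grounded(model_answer):
        return model_answer
    return "ESCALATE: route to a human clinician for review"

print(respond("Ibuprofen can cause stomach irritation"))
print(respond("Vitamin C cures influenza"))  # unsupported claim -> escalated
```

The design choice worth noting is the default: when the guardrail cannot verify an answer, the safe behavior is escalation, not a best guess.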



Even if the information an AI system offers to its user isn’t the product of a
hallucination, is it serving its intended purpose? We already have an early
example of the potential harm a chatbot can cause.



In May, the nonprofit National Eating Disorders Association (NEDA) announced
that it would replace the humans manning its helpline with a chatbot, Tessa. The
transition to Tessa was announced shortly after helpline staff notified the
nonprofit of plans to unionize, NPR reported. But Tessa didn’t stay online for
long.



The chatbot gave out dieting advice to people seeking support for eating
disorders, NPR reported in a follow-up piece. NEDA CEO Liz Thompson told NPR the
organization was not aware Tessa would be able to create new responses, beyond
what was scripted. Michiel Rauws, founder and CEO of Cass, the company behind
Tessa, told NPR that the chatbot's generative AI upgrade was part of NEDA's
contract.



Regardless of the finger pointing, Tessa was taken down. NEDA shared a statement
via email: “Eating disorders are complex mental health concerns, and our primary
goal is to ensure people impacted are able to connect to accurate information,
evidence-based treatment and timely care. There is still much to be learned
about how AI will function in the area of health conditions. For this reason, we
do not plan to reintroduce any AI-based programming until we have a better
understanding of how utilizing AI technology would be beneficial (and not
harmful) to those who are affected by eating disorders and other mental health
concerns.”


3. PRIVACY AND CONSENT

Protected health information (PHI) is safeguarded under the Health Insurance
Portability and Accountability Act (HIPAA). And AI introduces some interesting
privacy challenges. AI, in health care and every other industry, requires big
data. Where does the data come from? How is it shared? Do people even know if
and how their information is being used? Once it is being used by an AI system,
is data safe from prying eyes? From threat actors who would exploit and sell it?
Can users trust that the companies using AI tools will maintain their privacy?
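One partial safeguard before patient data feeds an AI system is stripping identifiers. As a rough illustration only (HIPAA's Safe Harbor de-identification method covers 18 identifier categories, far beyond what a few patterns can catch), some common identifiers can be redacted with regular expressions:

```python
import re

# Illustrative only: real HIPAA Safe Harbor de-identification covers 18
# identifier categories (names, dates, geography, and more); this sketch
# redacts just a few common machine-readable patterns.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def redact(text):
    """Replace each matched identifier with a placeholder label."""
    for pattern, label in PATTERNS:
        text = pattern.sub(label, text)
    return text

note = "Patient SSN 123-45-6789, call 555-867-5309 or jdoe@example.com"
print(redact(note))
```

Pattern matching alone is not de-identification; names, dates, and free-text clues slip through, which is why serious pipelines layer statistical or expert review on top.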



Answering these questions as an individual is tricky, and ethical concerns come
to the fore when considering the privacy track record of some companies
participating in the health care space.



In March, the Federal Trade Commission (FTC) said that BetterHelp, a mental
health platform that connects people with licensed and credentialed therapists,
broke its privacy promises. The platform “repeatedly pushed people to take an
Intake Questionnaire and hand over sensitive health information through
unavoidable prompts,” according to the FTC. BetterHelp shared private
information of more than 7 million users with platforms like Criteo, Facebook,
Pinterest and SnapChat, according to the FTC statement. The mental health
platform will have to pay $7.8 million, and the FTC has banned it from sharing
consumer health data for advertising purposes.



BetterHelp isn’t an AI system (it does use AI to help match users with
therapists, according to Behavioral Health Business), but its handling of
privacy illuminates a troubling divide between the way people and companies are
treated.



Maria Espinola, PsyD, a licensed clinical psychologist and CEO of the Institute
for Health Equity and Innovation, tells InformationWeek about BetterHelp's
run-in with the FTC: “If I had done that as a psychologist, my license would be
taken away,” she says. For either a therapist or a corporation to sell a
patient's personal information for profit after saying they would not, Espinola
says, is a violation of trust. BetterHelp is still in business.



As AI tools are increasingly integrated into health care, will the companies
developing them honor patient information privacy requirements? Will more
enforcement action become necessary?



Patients have a right to keep their medical information private -- although
that right is hardly guaranteed in the age of data breaches and questionable
corporate practices -- but do they have a right to know if AI is being used in
their treatment, or whether their data is being used to train an AI system?



Koko, a nonprofit online mental health support platform, stirred up consent
controversy earlier this year. In January, the company’s co-founder Rob Morris,
PhD, took to X (then Twitter) to share the results of an experiment. The company
set out to see if GPT-3 could improve outcomes on its platform.



Part of the Koko platform allows users to “send short, anonymous messages of
hope to one another,” says Morris. Users had the option to use Koko Bot (GPT-3)
to respond to these short messages. They could send what GPT-3 wrote as is or
edit it. The recipients would receive the messages with a note informing them if
Koko Bot helped to craft those words. Koko used this approach for approximately
30,000 messages, according to the X thread.



The experiment led to widespread discussion of ethics and consumer consent. The
people sending the messages had the choice to use Koko Bot to help them write,
while the recipients did not have any opportunity to opt out, unless they simply
did not read the message.



In his initial thread, Morris explains that the messages composed by AI were
rated higher than those written solely by humans. But Koko pulled the
AI-supported option from its platform.



“We strongly suspected that over the longtime, this would be a detriment to our
platform and that empathy is more than just the words we say. It's more than
just having a perfectly articulated human AI generated response. It's the time
and effort you take to compose those words that's as important,” Morris
explains.



Morris believes much of the criticism the experiment sparked was due to
misinterpretation. He emphasizes that users (both message senders and
recipients) were aware that AI was involved.



“I think posting it on Twitter was not the best way to do this because people
misinterpreted a host of things including how our platform works,” he says.



Koko is making some changes to the way it conducts research. It is now exploring
opportunities to conduct research through institutional review boards (IRBs),
which are independent boards that review studies to ensure they meet ethical
standards. Koko is working with Compass Ethics “to help better inform and
publicly communicate our principles and ethics guidelines,” according to Morris.


4. TRANSPARENCY

The term “black box” comes up a lot in conversations about AI. Once an AI system
is put into practice, how can users tell where its outputs are coming from? How
did it reach a specific conclusion? The answer can be trapped in that black box;
there is no way to retrace its decision-making process. Often the developers
behind these systems claim that the algorithms used to create them are
proprietary: in other words, the box is going to stay closed.



“Until there is clarity, transparency around where the models were built, where
they were developed or that they had been validated. It’s really buyer beware,”
says Ehrenfeld.



He stresses the importance of transparency and always keeping a human in the
loop to supervise and correct any errors that arise from the use of an AI
system. When humans are left out of that loop, the results can be disastrous.
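Keeping a human in the loop can be made concrete as a simple routing rule, sketched below with illustrative thresholds and labels (none of this reflects any specific deployed system): an AI suggestion is surfaced only when the model is confident and its reasoning is auditable, and everything else defaults to human review.

```python
# Human-in-the-loop routing sketch (thresholds and labels are illustrative):
# an AI suggestion is auto-surfaced only when the model is confident AND its
# reasoning trail is available for audit; everything else goes to a clinician.

CONFIDENCE_FLOOR = 0.90

def route(prediction, confidence, audit_trail):
    """Decide whether a model output may be surfaced or must go to human review."""
    if confidence >= CONFIDENCE_FLOOR and audit_trail:
        return f"suggest: {prediction} (clinician sign-off still required)"
    return "human review: confidence or transparency requirement not met"

print(route("benign", 0.97, audit_trail=True))
print(route("malignant", 0.55, audit_trail=True))   # low confidence -> human
print(route("benign", 0.99, audit_trail=False))     # black box -> human
```

Note the third case: even a highly confident prediction is routed to a human when the system cannot explain itself, which is exactly the transparency requirement Ehrenfeld describes.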



Ehrenfeld shares an example from another industry: aviation. In 2019, The New
York Times reported on flaws that resulted in the fatal crashes of two of
Boeing’s 737 Max planes. The planes were outfitted with an automated system
designed to gently push the plane’s nose down in rare conditions to improve
handling. That system underwent an overhaul, going from two types of sensors to
just one. Two planes outfitted with this system nosedived within minutes of
takeoff.



A series of decisions made to rush the completion of the plane meant that Max
pilots did not know about the system’s software until after the first crash,
according to The New York Times.



“We cannot make that same mistake in health care. We cannot incorporate AI
systems and not allow physicians, the end users, to know that those systems are
operating in the background,” says Ehrenfeld. He stresses the ethical obligation
to ensure there is “…the right level of transparency tied to the deployment of
these technologies so that we can always ensure the highest quality safest care
for our patients.”  




ABOUT THE AUTHOR(S)

Carrie Pallardy

Contributing Reporter

Carrie Pallardy is a freelance writer and editor living in Chicago. She writes
and edits in a variety of industries including cybersecurity, healthcare, and
personal finance.

Copyright © 2023. All rights reserved. Informa Tech, a trading division of
Informa PLC.
