philosophicaldisquisitions.blogspot.com
2a00:1450:4001:82f::2001

Submitted URL: http://philosophicaldisquisitions.blogspot.com/
Effective URL: https://philosophicaldisquisitions.blogspot.com/
Submission: On November 27 via api from US — Scanned from DE

Form analysis: 9 forms found in the DOM. The first seven are identical copies of the MailChimp embedded subscribe form; only one instance is reproduced below.

Name: mc-embedded-subscribe-form | POST //blogspot.us14.list-manage.com/subscribe/post?u=58eb058b33241976ce21bc706&id=5cb9ef6d67

<form action="//blogspot.us14.list-manage.com/subscribe/post?u=58eb058b33241976ce21bc706&amp;id=5cb9ef6d67" class="validate" id="mc-embedded-subscribe-form" method="post" name="mc-embedded-subscribe-form" novalidate="" target="_blank">
  <div id="mc_embed_signup_scroll">
    <label>Subscribe to the newsletter</label>
    <input class="email" id="mce-EMAIL" name="EMAIL" required="" type="email" value="">
    <!--real people should not fill this in and expect good things - do not remove this or risk form bot signups-->
    <div aria-hidden="true" style="left: -5000px; position: absolute;"><input name="b_58eb058b33241976ce21bc706_5cb9ef6d67" tabindex="-1" type="text" value=""></div>
    <div class="clear"><input class="button" id="mc-embedded-subscribe" name="subscribe" type="submit" value="Subscribe"></div>
  </div>
</form>

Name: mc-embedded-subscribe-form | POST //blogspot.us14.list-manage.com/subscribe/post?u=58eb058b33241976ce21bc706&id=5cb9ef6d67

<form action="//blogspot.us14.list-manage.com/subscribe/post?u=58eb058b33241976ce21bc706&amp;id=5cb9ef6d67" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank" novalidate="">
  <div id="mc_embed_signup_scroll">
    <label for="mce-EMAIL">Sign-up for the Newsletter</label>
    <input type="email" value="" name="EMAIL" class="email" id="mce-EMAIL" placeholder="email address" required="">
    <!-- real people should not fill this in and expect good things - do not remove this or risk form bot signups-->
    <div style="position: absolute; left: -5000px;" aria-hidden="true"><input type="text" name="b_58eb058b33241976ce21bc706_5cb9ef6d67" tabindex="-1" value=""></div>
    <div class="clear"><input type="submit" value="Subscribe" name="subscribe" id="mc-embedded-subscribe" class="button"></div>
  </div>
</form>
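The hidden input named b_58eb058b33241976ce21bc706_5cb9ef6d67 in the subscribe forms above is a honeypot: it is positioned off-screen, so human subscribers leave it empty, while naive bots that fill every field reveal themselves. Below is a minimal, illustrative Python sketch of how such a field is typically checked on the receiving end; this is not MailChimp's actual implementation.

# Illustrative sketch (not MailChimp's backend): filtering signups using the
# honeypot field from the form above. Real users never see the off-screen
# input, so it arrives empty; simple bots that fill every field get dropped.
from typing import Mapping

HONEYPOT_FIELD = "b_58eb058b33241976ce21bc706_5cb9ef6d67"  # hidden field name from the form

def is_probable_bot(form_data: Mapping[str, str]) -> bool:
    """Return True if the hidden honeypot field was filled in."""
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())

def handle_subscription(form_data: Mapping[str, str]) -> str:
    if is_probable_bot(form_data):
        return "discarded"          # silently drop suspected bot signups
    email = form_data.get("EMAIL", "").strip()
    if not email:
        return "missing email"      # the visible EMAIL input is required
    return f"subscribed {email}"

# A human leaves the hidden field blank; a naive bot fills it in.
print(handle_subscription({"EMAIL": "reader@example.com", HONEYPOT_FIELD: ""}))
print(handle_subscription({"EMAIL": "bot@spam.example", HONEYPOT_FIELD: "filled-by-bot"}))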

GET https://philosophicaldisquisitions.blogspot.com/search

<form action="https://philosophicaldisquisitions.blogspot.com/search" class="gsc-search-box" target="_top">
  <table cellpadding="0" cellspacing="0" class="gsc-search-box">
    <tbody>
      <tr>
        <td class="gsc-input">
          <input autocomplete="off" class="gsc-input" name="q" size="10" title="search" type="text" value="">
        </td>
        <td class="gsc-search-button">
          <input class="gsc-search-button" title="search" type="submit" value="Search">
        </td>
      </tr>
    </tbody>
  </table>
</form>
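Since the search form above declares no method attribute, submitting it issues a GET request to the /search action with the q input appended as a query parameter. A small sketch of the resulting request URL; the search term is a made-up example.

# Sketch of the request a browser builds when the search form is submitted:
# GET to the form's action URL with the "q" field URL-encoded as a query string.
from urllib.parse import urlencode

action = "https://philosophicaldisquisitions.blogspot.com/search"
query = {"q": "value alignment"}   # hypothetical search term

request_url = f"{action}?{urlencode(query)}"
print(request_url)
# https://philosophicaldisquisitions.blogspot.com/search?q=value+alignment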

Text Content

PHILOSOPHICAL DISQUISITIONS

Things hid and barr'd from common sense




PAGES

 * Home
 * Book
 * About
 * Podcast
 * Best Of
 * Papers
 * Media
 * Newsletter






TUESDAY, OCTOBER 10, 2023


TITE 3 - VALUE ALIGNMENT AND THE CONTROL PROBLEM






In this episode, John and Sven discuss risk and technology ethics. They focus,
in particular, on the perennially popular and widely discussed problems of value
alignment (how to get technology to align with our values) and control (making
sure technology doesn't do something terrible). They start the conversation with
the famous case study of Stanislav Petrov and the prevention of nuclear war.

You can listen below or download the episode here. You can also subscribe to the
podcast on Apple, Spotify, Google, Amazon and a range of other podcasting
services.





RECOMMENDATIONS FOR FURTHER READING

   
 * Atoosa Kasirzadeh and Iason Gabriel, 'In Conversation with AI: Aligning
   Language Models with Human Values'
   
   
 * Nick Bostrom, relevant chapters from Superintelligence
   
   
 * Stuart Russell, Human Compatible
   
   
 * Langdon Winner, 'Do Artifacts Have Politics?'
   
   
 * Iason Gabriel, 'Artificial Intelligence, Values and Alignment'
   
   
 * Brian Christian, The Alignment Problem
   


DISCOUNT

You can purchase a 20% discounted copy of This is Technology Ethics by using the
code TEC20 at the publisher's website.

Posted by John Danaher at 11:58 AM 2 comments:
Labels: Podcast



FRIDAY, SEPTEMBER 29, 2023


TITE 2: THE METHODS OF TECHNOLOGY ETHICS




In this episode, John and Sven discuss the methods of technology ethics. What
exactly is it that technology ethicists do? How can they answer the core
questions about the value of technology and our moral response to it? Should
they consult their intuitions? Run experiments? Use formal theories? The
possible answers to these questions are considered with a specific case study on
the ethics of self-driving cars.



You can listen below or download the episode here. You can also subscribe to the
podcast on Apple, Spotify, Google, Amazon and a range of other podcasting
services.





RECOMMENDED READING

   
 * Peter Königs, 'Of Trolleys and Self-Driving Cars: What Machine Ethicists Can
   and Cannot Learn from Trolleyology'


 * John Harris, 'The Immoral Machine'


 * Edmond Awad et al., 'The Moral Machine Experiment'
   

DISCOUNT

You can purchase a 20% discounted copy of This is Technology Ethics by using the
code TEC20 at the publisher's website.




Posted by John Danaher at 10:54 AM No comments:
Labels: Podcast



MONDAY, SEPTEMBER 25, 2023


NEW PODCAST SERIES - 'THIS IS TECHNOLOGY ETHICS'






I am very excited to announce the launch of a new podcast series with my
longtime friend and collaborator Sven Nyholm. The podcast is intended to
introduce key themes, concepts, arguments and ideas arising from the ethics of
technology. It roughly follows the structure of the book This is Technology
Ethics by Sven, but in a loose and conversational style. In the nine episodes,
we will cover the nature of technology and ethics, the methods of technology
ethics, and the problems of control, responsibility, agency and behaviour change
that are central to many contemporary debates about the ethics of technology. We
will also cover perennially popular topics such as whether a machine could have
moral status, whether a robot could (or should) be a friend, lover or work
colleague, and the desirability of merging with machines. The podcast is
intended to be accessible to a wide audience and could provide an ideal
companion to an introductory or advanced course in the ethics of technology
(with particular focus on AI, robotics and other digital technologies).

I will be releasing the podcast on the Philosophical Disquisitions podcast feed,
but I have also created an independent podcast feed and website, if you are just
interested in it. The first episode can be downloaded here or you can listen
below. You can also subscribe on Apple, Spotify, Amazon and a range of other
podcasting services.






If you go to the website or subscribe via the standalone feed, you can download the
first two episodes now. There is also a promotional tie-in with the book publisher.
If you use the code 'TEC20' on the publisher's website (here), you can get 20%
off the regular price.

Posted by John Danaher at 12:40 PM No comments:
Labels: Podcast



FRIDAY, JUNE 9, 2023


THE ETHICS OF ACADEMIA (PODCAST SERIES)









About a year ago, I put together a series of podcasts called 'The Ethics of
Academia'. The purpose of the podcast was to explore the ethical dilemmas facing
academics in their work as researchers, teachers and (to a slightly lesser
extent) administrators/leaders. Here are the links to all 12 episodes, along
with brief descriptions of their content. You can subscribe to or download the full
set of episodes on Apple, Spotify, Amazon, Google, or a range of other
services.






 * 1 - Sven Nyholm and the Division of Labour: A wide-ranging conversation with
   Sven Nyholm (now Professor of the Ethics of AI at the University of Munich), in
   which he reflects, in particular, on the ethical importance of the division
   of labour in academia (among many other topics).



 * 2 - Michael Cholbi on Being Answerable to Humankind: Interview with Michael
   Cholbi, Professor of Philosophy at the University of Edinburgh. We reflect on
   the value of applied ethical research and the right approach to teaching.
   Michael has thought quite a lot about the ethics of work, in general, and the
   ethics of teaching and grading in particular. So those become central themes
   in our conversation.



 * 3 - Regina Rini and the Value of Speaking to the Public: Interview with
   Regina Rini, Canada Research Chair at York University in Toronto. Regina has
   a background in neuroscience and cognitive science but now works primarily in
   moral philosophy. She has the distinction of writing a lot of philosophy for
   the public through her columns for the Times Literary Supplement, and the value
   of this public writing becomes a major theme of our conversation.



 * 4 - Justin Weinberg on the State of Philosophy: Interview with Justin
   Weinberg, Associate Professor of Philosophy at the University of South
   Carolina. Justin researches ethical and social philosophy, as well as
   metaphilosophy. He is also the editor of the popular Daily Nous blog and has,
   as a result, developed an interest in many of the moral dimensions of
   philosophical academia. As a result, our conversation traverses a wide
   territory, from the purpose of research to the ethics of grading.



 * 5 - Brian Earp on Connecting Research to the Real World: Interview with Brian
   Earp, Senior Research Fellow with the Uehiro Centre for Practical Ethics in
   Oxford. He is a prolific researcher and writer in psychology and applied
   ethics. We talk about how Brian ended up where he is, the value of applied
   research, and the importance of connecting research to the real world.



 * 6 - Helen de Cruz on Prestige Bias and the Duty to Review: Interview with
   Helen de Cruz, Danforth Chair in the Humanities at Saint Louis University.
   Helen researches the philosophy of belief formation, but also does a lot of
   professional and public outreach, writes science fiction, and is a very
   talented illustrator/artist. We talk about the ethics of research, teaching,
   public outreach and professional courtesy. Some of the particular highlights
   from the conversation are her thoughts on prestige bias in academia and the
   crisis of peer reviewing.



 * 7 - Aaron Rabinowitz on the Pedagogy of Moral Luck: Interview with Aaron
   Rabinowitz, veteran podcaster and philosopher. He is currently doing a PhD in
   the philosophy of education at Rutgers University. He is particularly
   interested in the problem of moral luck and how it should affect our approach
   to education. So that's what we talk about. 



 * 8 - Zena Hitz on Great Books and the Value of Learning: Interview with Zena
   Hitz, currently a tutor at St John’s College. She is a classicist and author
   of the book Lost in Thought. We talk about losing faith in academia, the
   dubious value of scholarship, the importance of learning, and the risks
   inherent in teaching. I learned a lot from Zena and found her perspective on
   the role of academics and educators to be enlightening.



 * 9 - Jason Brennan on the Moral Mess of Higher Education: Interview with Jason
   Brennan, Professor of Strategy, Economics, Ethics, and Public Policy at the
   McDonough School of Business at Georgetown University. Jason has written
   quite a bit about the moral failures and conundrums of higher education,
   which makes him an ideal guest for this podcast. We talk about the purpose of
   research, the ethics of (excess?) scholarly productivity, the problem with
   PhD programmes and the plight of adjuncts.



 * 10 - Jesse Stommel on the Philosophy of Ungrading: Is grading unethical?
   Coercive and competitive? Should we replace grading with something else? In
   this podcast I chat to Jesse Stommel, one of the foremost proponents of
   ‘ungrading’. Jesse is a faculty member of the writing program at the
   University of Denver. We talk about the problem with traditional grading
   systems, the idea of ungrading, and how to create communities of respect in
   the classroom.



 * 11 - Jessica Flanigan on Gadflies and Critical Thinking: Interview with
   Jessica Flanigan, Professor of Leadership Ethics at the University of
   Richmond. We talk about the value of philosophical research, whether
   philosophers should emulate Socrates, and how to create good critical
   discussions in the classroom. I particularly enjoyed Jessica’s thoughts about
   effective teaching and I think everyone can learn something from them.



 * 12 - Olle Häggström on Romantics vs Vulgarists in Scientific
   Research: Interview with Olle Häggström, a professor of mathematical
   statistics at Chalmers University of Technology in Sweden. Having spent the
   first half of his academic life focused largely on pure mathematical
   research, Olle has shifted in recent years to consider how research can
   benefit humanity and how some research might be too risky to pursue. We have
   a detailed conversation about the ethics of research and contrast different
   ideals of what it means to be a scientist in the modern age. 










Posted by John Danaher at 9:48 AM 3 comments:




TUESDAY, JUNE 6, 2023


110 - CAN WE PAUSE AI DEVELOPMENT? EVIDENCE FROM THE HISTORY OF TECHNOLOGICAL
RESTRAINT



In this episode, I chat to Matthijs Maas about pausing AI development. Matthijs
is currently a Senior Research Fellow at the Legal Priorities Project and a
Research Affiliate at the Centre for the Study of Existential Risk at the
University of Cambridge. In our conversation, we focus on the possibility of
slowing down or limiting the development of technology. Many people are
sceptical of this possibility, but Matthijs has been doing some extensive
research on historical case studies of apparently successful technological
slowdown. We discuss these case studies in some detail.

You can download the episode here or listen below. You can also subscribe to the
podcast on Apple, Spotify, Google, Amazon or whatever your preferred service
might be.


RELEVANT LINKS

 * Recording of Matthijs's Chalmers talk on this
   topic: https://www.youtube.com/watch?v=vn4ADfyrJ0Y&t=2s 
 * Slides from this talk
   -- https://drive.google.com/file/d/1J9RW49IgSAnaBHr3-lJG9ZOi8ZsOuEhi/view?usp=share_link
 * Previous essay / primer, laying out the basics of the
   argument:  https://verfassungsblog.de/paths-untaken/
 * Incomplete longlist database of candidate case
   studies: https://airtable.com/shrVHVYqGnmAyEGsz
   
   
   







Posted by John Danaher at 7:07 PM 1 comment:
Labels: Podcast



THURSDAY, JUNE 1, 2023


MECHANISMS OF TECHNO-MORAL CHANGE: A TAXONOMY AND OVERVIEW






I just published a new paper with my co-author Henrik Skaug Sætra. It's about
the ways in which technology can alter our moral beliefs and practices. Many
people study the phenomenon of techno-moral change but, to some extent, the
existing literature is fragmented and heterogeneous - lots of case studies and
examples but not enough theoretical unity. The goal of this paper is to bring
some order to existing discussions by proposing a taxonomy of mechanisms of
techno-moral change. We argue that there are six primary mechanisms through
which technology can alter moral beliefs and practices and that these slot into
three main categories (decisional, relational, perceptual). More details in the
abstract below. The table, pictured above, summarises the key ideas in the
paper. The full paper is available open access at the link provided.

> Title: Mechanisms of Techno-Moral Change: A Taxonomy and Overview

> Links: Official (free OA); Researchgate; Philpapers

> Abstract: The idea that technologies can change moral beliefs and practices is
> an old one. But how, exactly, does this happen? This paper builds on an
> emerging field of inquiry by developing a synoptic taxonomy of the mechanisms
> of techno-moral change. It argues that technology affects moral beliefs and
> practices in three main domains: decisional (how we make morally loaded
> decisions), relational (how we relate to others) and perceptual (how we
> perceive situations). It argues that across these three domains there are six
> primary mechanisms of techno-moral change: (i) adding options; (ii) changing
> decision-making costs; (iii) enabling new relationships; (iv) changing the
> burdens and expectations within relationships; (v) changing the balance of
> power in relationships; and (vi) changing perception (information, mental
> models and metaphors). The paper also discusses the layered, interactive and
> second-order effects of these mechanisms.
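The table referred to above is not reproduced in this text extraction. As a rough stand-in, here is a sketch of the taxonomy as the abstract describes it, written as a small Python mapping; note that the grouping of the six mechanisms under the three domains is inferred from the order in which the abstract lists them, not copied from the paper's table.

# Sketch of the taxonomy of mechanisms of techno-moral change described in the
# abstract. The assignment of mechanisms to domains is inferred from the listing
# order and may not match the paper's table exactly.
TECHNO_MORAL_MECHANISMS = {
    "decisional": [          # how we make morally loaded decisions
        "adding options",
        "changing decision-making costs",
    ],
    "relational": [          # how we relate to others
        "enabling new relationships",
        "changing the burdens and expectations within relationships",
        "changing the balance of power in relationships",
    ],
    "perceptual": [          # how we perceive situations
        "changing perception (information, mental models and metaphors)",
    ],
}

for domain, mechanisms in TECHNO_MORAL_MECHANISMS.items():
    print(domain, "->", "; ".join(mechanisms))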




 

Posted by John Danaher at 8:53 AM 1 comment:




TUESDAY, MAY 30, 2023


109 - HOW CAN WE ALIGN LANGUAGE MODELS LIKE GPT WITH HUMAN VALUES?









In this episode of the podcast I chat to Atoosa Kasirzadeh. Atoosa is an
Assistant Professor/Chancellor's Fellow at the University of Edinburgh. She is
also the Director of Research at the Centre for Technomoral Futures at
Edinburgh. We chat about the alignment problem in AI development: roughly, how
do we ensure that AI acts in a way that is consistent with human values? We
focus, in particular, on the alignment problem for language models such as
ChatGPT, Bard and Claude, and how some old ideas from the philosophy of language
could help us to address this problem.

You can download the episode here or listen below. You can also subscribe to the
podcast on Apple, Spotify, Google, Amazon or whatever your preferred service
might be.





RELEVANT LINKS

 * Atoosa's webpage

 * Atoosa's paper (with Iason Gabriel) 'In Conversation with AI: Aligning
   Language Models with Human Values'







Posted by John Danaher at 11:19 AM 2 comments:
Labels: Podcast

Older Posts | Home

Subscribe to: Posts (Atom)

Sign-up for the Newsletter





BOOK


AUTOMATION AND UTOPIA IS NOW AVAILABLE!

[ Amazon.com ] [ Amazon.co.uk ] [ Book Depository ] [ Harvard UP ] [ Indiebound ]
[ Google Play ] "Armed with an astonishing br...




FOLLOW ME!

 * On Twitter
 * On Philpapers
 * On ResearchGate
 * On Academia.edu
 * On Facebook




SUBSCRIBE TO

Posts (Atom)
All Comments (Atom)





SEARCH THIS BLOG






TOTAL PAGEVIEWS

6,723,047



POPULAR POSTS

 * Mill's Argument for Free Speech: A Guide
   
 * Understanding Legal Argument (1): The Five Types of Argument
   
 * Understanding Nihilism: What if nothing matters?
   
 * Understanding Ideologies: Liberalism, Socialism and Conservatism
   
 * Understanding the Experience Machine Argument
   




BLOG ARCHIVE

 * ▼  2023 (21)
   * ▼  October (1)
     * TITE 3 - Value Alignment and the Control Problem
   * ►  September (2)
   * ►  June (3)
   * ►  May (5)
   * ►  April (5)
   * ►  March (4)
   * ►  February (1)

 * ►  2022 (36)
   * ►  December (1)
   * ►  November (7)
   * ►  October (1)
   * ►  September (3)
   * ►  August (5)
   * ►  July (5)
   * ►  June (5)
   * ►  May (2)
   * ►  April (3)
   * ►  March (2)
   * ►  February (2)

 * ►  2021 (36)
   * ►  December (1)
   * ►  November (5)
   * ►  July (3)
   * ►  June (5)
   * ►  May (4)
   * ►  April (5)
   * ►  March (5)
   * ►  February (3)
   * ►  January (5)

 * ►  2020 (49)
   * ►  December (3)
   * ►  November (4)
   * ►  October (5)
   * ►  September (2)
   * ►  August (2)
   * ►  July (5)
   * ►  June (2)
   * ►  May (1)
   * ►  April (9)
   * ►  March (6)
   * ►  February (4)
   * ►  January (6)

 * ►  2019 (78)
   * ►  December (7)
   * ►  November (7)
   * ►  October (5)
   * ►  September (6)
   * ►  August (8)
   * ►  July (4)
   * ►  June (6)
   * ►  May (6)
   * ►  April (8)
   * ►  March (8)
   * ►  February (6)
   * ►  January (7)

 * ►  2018 (76)
   * ►  December (9)
   * ►  November (5)
   * ►  October (7)
   * ►  September (7)
   * ►  August (6)
   * ►  July (5)
   * ►  June (5)
   * ►  May (5)
   * ►  April (5)
   * ►  March (8)
   * ►  February (5)
   * ►  January (9)

 * ►  2017 (91)
   * ►  December (12)
   * ►  November (5)
   * ►  October (8)
   * ►  September (4)
   * ►  August (6)
   * ►  July (8)
   * ►  June (7)
   * ►  May (11)
   * ►  April (6)
   * ►  March (9)
   * ►  February (7)
   * ►  January (8)

 * ►  2016 (100)
   * ►  December (11)
   * ►  November (10)
   * ►  October (5)
   * ►  September (8)
   * ►  August (5)
   * ►  July (10)
   * ►  June (12)
   * ►  May (9)
   * ►  April (4)
   * ►  March (11)
   * ►  February (6)
   * ►  January (9)

 * ►  2015 (100)
   * ►  December (13)
   * ►  November (8)
   * ►  October (9)
   * ►  September (7)
   * ►  August (10)
   * ►  July (10)
   * ►  June (8)
   * ►  May (7)
   * ►  April (8)
   * ►  March (4)
   * ►  February (7)
   * ►  January (9)

 * ►  2014 (118)
   * ►  December (9)
   * ►  November (8)
   * ►  October (8)
   * ►  September (11)
   * ►  August (9)
   * ►  July (19)
   * ►  June (4)
   * ►  May (9)
   * ►  April (14)
   * ►  March (8)
   * ►  February (5)
   * ►  January (14)

 * ►  2013 (100)
   * ►  December (16)
   * ►  November (5)
   * ►  October (7)
   * ►  September (8)
   * ►  August (10)
   * ►  July (8)
   * ►  June (6)
   * ►  May (5)
   * ►  April (10)
   * ►  March (9)
   * ►  February (7)
   * ►  January (9)

 * ►  2012 (100)
   * ►  December (10)
   * ►  November (6)
   * ►  October (7)
   * ►  September (8)
   * ►  August (9)
   * ►  July (12)
   * ►  June (9)
   * ►  May (9)
   * ►  April (7)
   * ►  March (9)
   * ►  February (3)
   * ►  January (11)

 * ►  2011 (133)
   * ►  December (8)
   * ►  November (12)
   * ►  October (11)
   * ►  September (11)
   * ►  August (1)
   * ►  July (8)
   * ►  June (16)
   * ►  May (29)
   * ►  April (15)
   * ►  March (7)
   * ►  February (9)
   * ►  January (6)

 * ►  2010 (188)
   * ►  December (8)
   * ►  November (2)
   * ►  October (19)
   * ►  September (12)
   * ►  August (15)
   * ►  July (18)
   * ►  June (9)
   * ►  May (14)
   * ►  April (19)
   * ►  March (17)
   * ►  February (11)
   * ►  January (44)

 * ►  2009 (31)
   * ►  December (31)




ABOUT ME

John Danaher: I like to imagine, navigate and analyse the future of humanity.
View my complete profile







CC LICENCE


This work by John Danaher is licensed under a Creative Commons
Attribution-NonCommercial-NoDerivs 3.0 Unported License.



Simple theme. Powered by Blogger.


