www.lesswrong.com (3.219.92.161), Public Scan

Submitted URL: https://lesswrong.com/
Effective URL: https://www.lesswrong.com/
Submission Tags: analytics-framework
Submission: On April 26 via api from US — Scanned from DE

Form analysis: 1 form found in the DOM

<form class="WrappedLoginForm-root">
  <input type="text" value="" name="email" placeholder="email" class="WrappedLoginForm-input">
  <input type="text" value="" name="username" placeholder="username" class="WrappedLoginForm-input">
  <input type="password" value="" name="password" placeholder="create password" class="WrappedLoginForm-input">
  <input type="submit" class="WrappedLoginForm-submit" value="Sign Up">
  <div class="SignupSubscribeToCurated-root">
    <span class="MuiButtonBase-root MuiIconButton-root MuiSwitchBase-root MuiCheckbox-root MuiCheckbox-colorSecondary MuiSwitchBase-checked MuiCheckbox-checked SignupSubscribeToCurated-checkbox">
      <span class="MuiIconButton-label">
        <svg class="MuiSvgIcon-root" focusable="false" viewBox="0 0 24 24" aria-hidden="true" role="presentation">
          <path d="M19 3H5c-1.11 0-2 .9-2 2v14c0 1.1.89 2 2 2h14c1.11 0 2-.9 2-2V5c0-1.1-.89-2-2-2zm-9 14l-5-5 1.41-1.41L10 14.17l7.59-7.59L19 8l-9 9z"></path>
        </svg>
        <input type="checkbox" checked="" class="MuiSwitchBase-input" data-indeterminate="false" value="">
      </span>
      <span class="MuiTouchRipple-root"></span>
    </span>
    Subscribe to Curated posts
    <svg class="MuiSvgIcon-root SignupSubscribeToCurated-infoIcon" focusable="false" viewBox="0 0 24 24" aria-hidden="true" role="presentation" title="Emails 2-3 times per week with the best posts, chosen by the LessWrong moderation team.">
      <path fill="none" d="M0 0h24v24H0z"></path>
      <path d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm1 15h-2v-6h2v6zm0-8h-2V7h2v2z"></path>
    </svg>
  </div>
  <div class="WrappedLoginForm-options">
    <span class="WrappedLoginForm-toggle">Log In</span>
    <span class="WrappedLoginForm-toggle">Reset Password</span>
  </div>
  <div class="WrappedLoginForm-oAuthComment">...or continue with</div>
  <div class="WrappedLoginForm-oAuthBlock">
    <a class="WrappedLoginForm-oAuthLink" href="/auth/facebook?returnTo=/">FACEBOOK</a>
    <a class="WrappedLoginForm-oAuthLink" href="/auth/google?returnTo=/">GOOGLE</a>
    <a class="WrappedLoginForm-oAuthLink" href="/auth/github?returnTo=/">GITHUB</a>
  </div>
</form>

Text Content

Library
Sequence Highlights
Rationality: A-Z
The Codex
HPMOR
Best Of
Community Events
Zuzalu

Fri Mar 24 • Tivat: RaD-AI workshop
Tue May 30 • Greater London: Argentines LW/SSC/EA/MIRIx - Call to All
Tue Apr 18 • Online: Discuss AI Policy Recommendations
Wed Apr 26 • Toronto




RECOMMENDATIONS


Fake Beliefs

If there’s a foundational skill in the martial art of rationality, a mental
stance on which all other technique rests, it might be: the ability to spot,
inside your own head, psychological signs that you have a mental map of
something, and signs that you don’t...

First Post: Making Beliefs Pay Rent (in Anticipated Experiences)




374 · Welcome to LessWrong! · Ruby, Raemon, RobertM, habryka · 4y · 51
718 · Eight Short Studies On Excuses · Scott Alexander · 13y · 246
127 · A stylized dialogue on John Wentworth's claims about markets and optimization [Ω] · So8res · 1d · Ω 19
229 · On AutoGPT · Zvi · 5d · 37





LATEST POSTS


184 · Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023) · Chris Scammell, DivineMango · 13h · 17
172 · My Assessment of the Chinese AI Safety Community · Lao Mein · 1d · 32
27 · How Many Bits Of Optimization Can One Bit Of Observation Unlock? [Q, Ω] · johnswentworth · 5h · QΩ 1
129 · The Brain is Not Close to Thermodynamic Limits on Computation · DaemonicSigil · 2d · 42
27 · Exploring the Lottery Ticket Hypothesis · Rauno Arike · 9h · 3
100 · Deep learning models might be secretly (almost) linear · beren · 1d · 16
38 · Briefly how I've updated since ChatGPT · rime · 15h · 2
63 · The Toxoplasma of AGI Doom and Capabilities? · Robert_AIZI · 1d · 10
99 · Contra Yudkowsky on AI Doom · jacob_cannell · 2d · 94
114 · Could a superintelligence deduce general relativity from a falling apple? An investigation · titotal · 3d · 33
65 · AGI ruin mostly rests on strong claims about alignment and deployment, not about society · Rob Bensinger · 2d · 6
19 · AI Safety Newsletter #3: AI policy proposals and a new challenger approaches · Oliver Zhang · 13h · 0
22 · Notes on Potential Future AI Tax Policy · Zvi · 16h · 1





RECENT DISCUSSION


Contra Yudkowsky on AI Doom

99
jacob_cannell
Object-Level AI Risk Skepticism · AI
Frontpage
2d

Eliezer Yudkowsky predicts doom from AI: that humanity faces likely extinction
in the near future (years or decades) from a rogue unaligned superintelligent AI
system. Moreover he predicts that this is the default outcome, and AI alignment
is so incredibly difficult that even he failed to solve it.

EY is an entertaining and skilled writer, but do not confuse rhetorical writing
talent for depth and breadth of technical knowledge. I do not have EY's talents
there, or Scott Alexander's poetic powers of prose. My skill points instead have
gone near exclusively towards extensive study of neuroscience, deep learning,
and graphics/GPU programming. More than most, I actually have the depth and
breadth of technical knowledge necessary to evaluate these claims in detail.

I have evaluated this...

(Continue Reading – 2483 more words)
jacob_cannell · 7m · 20

So I assumed a specific relationship between "one unit of human-brain power" and "superintelligence capable of killing humanity". I use human-brain power as a unit, but the scaling doesn't have to be linear: imagine a graph with two labeled data points, one at (human, X:1) and another at (SI, X:10B). You can draw many different curves connecting those two points, and the Y axis is somewhat arbitrary.

Now maybe 10B HBP to kill humanity seems too high, but I am assuming humanity as a civilization that includes a ton of other compute, AI, and AGI, and I don't put much credence in strong nanotech.
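
To make the "many curves through two labeled points" picture concrete, here is a minimal plotting sketch (my illustration, assuming numpy and matplotlib; the curve family and axis units are arbitrary, not taken from the comment):

    # Illustrative only: two labeled points admit many monotone curves between them.
    import numpy as np
    import matplotlib.pyplot as plt

    x = np.logspace(0, 10, 200)      # 1 ... 10B "human-brain-power" units
    t = np.log10(x) / 10.0           # 0 at the human point, 1 at the SI point

    for p in (0.25, 1.0, 4.0):       # three arbitrary interpolations
        plt.plot(x, t**p, label=f"t^{p}")   # each passes through both endpoints

    plt.xscale("log")
    plt.scatter([1, 1e10], [0, 1], color="k", zorder=3)
    plt.annotate("human", (1, 0))
    plt.annotate("SI", (1e10, 1))
    plt.xlabel("human-brain-power equivalents (log scale)")
    plt.ylabel("arbitrary Y axis")
    plt.legend()
    plt.show()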

Reply

3 · jrincayc · 5h
Hm, neuron impulses travel at around 200 m/s while electric signals travel at around 2e8 m/s, so I think electronics have an advantage there. (I agree that you may have a point with "That Alien Mindspace".)
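
A rough back-of-the-envelope comparison of what those speeds mean for signal delay; the path lengths below are my own illustrative assumptions (roughly brain-scale and chip-scale distances), not figures from the comment:

    # Back-of-the-envelope: signal traversal time at the two propagation speeds above.
    axon_speed = 200.0   # m/s, roughly a fast myelinated axon
    wire_speed = 2e8     # m/s, electrical signal in a conductor

    brain_path = 0.1     # m, assumed path across a human brain
    chip_path = 0.03     # m, assumed path across a large chip

    print(f"brain traversal ~ {brain_path / axon_speed * 1e3:.2f} ms")   # ~0.50 ms
    print(f"chip traversal  ~ {chip_path / wire_speed * 1e9:.2f} ns")    # ~0.15 ns
    print(f"ratio ~ {(brain_path / axon_speed) / (chip_path / wire_speed):,.0f}x")  # ~3,333,333x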

2 · jacob_cannell · 3h
The brain's slow speed seems mostly for energy efficiency
[https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know#Speed]
but it is also closely tuned to brain size such that signal delay is not a
significant problem.

1 · berglund · 9h
I see, thanks for clarifying.
Moderation notes re: recent Said/Duncan threads

46
Raemon, Raemon
Demon Threads · LW Moderation · Moderation (topic) · Community · Site Meta
Personal Blog
11d

Update: Ruby and I have posted moderator notices for Duncan and Said in this
thread. This was a set of fairly difficult moderation calls on established users
and it seems good for the LessWrong userbase to have the opportunity to evaluate
it and respond. I'm stickying this post for a day-or-so.

 

Recently there's been a series of posts and comment back-and-forth between Said
Achmiz and Duncan Sabien, which escalated enough that it seemed like site
moderators should weigh in.

For context, here's a quick recap of recent relevant events as I'm aware of them. (I'm glossing over many details that are relevant, but getting everything exactly right is tricky.)

 1. Duncan posts Basics of Rationalist Discourse. Said writes some comments in
    response. 
 2. Zack posts "Rationalist Discourse" Is Like "Physicist Motors", which Duncan

...
(See More – 437 more words)
Said Achmiz · 11m · 20

> The claim I understand Ray to be making is that he believes you gave a false
> account of the site-wide norms about what users are obligated to do

Is that really the claim? I must object to it, if that’s so. I don’t think I’ve
ever made any false claims about what social norms obtain on Less Wrong (and to
the extent that some of my comments were interpreted that way, I was quick to
clearly correct that misinterpretation).

Certainly the “normatively correct general principles” comment didn’t contain
any such false claims. (And Raemon does not seem to be clai... (read more)

Reply

2 · lsusr · 26m
One solution is to limit the number of banned users to a small fraction of overall commenters. I've written 297 posts so far and have banned only 3 users
from commenting on them. (I did not ban Duncan or Said.) My highest-quality
criticism comes from users who I have never even considered banning. Their
comments are consistently well-reasoned and factually correct.

2 · Said Achmiz · 40m
Please note, my point in linking that comment wasn’t to suggest that the things
Benquo wrote are necessarily true and that the purported truth of those
assertions, in itself, bears on the current situation. (Certainly I do agree
with what he wrote—but then, I would, wouldn’t I?) Rather, I was making a
meta-level point. Namely: your thesis is that there is some behavior on my part
which is bad, and that what makes it bad is that it makes post authors feel… bad
in some way (“attacked”? “annoyed”? “discouraged”? I couldn’t say what the right
adjective is, here), and that as a consequence, they stop posting on Less Wrong.
And as the primary example of this purported bad behavior, you linked the
discussion in the comments of the “Zetetic Explanation” post by Benquo (which
resulted in the mod warning you noted). But the comment which I linked has
Benquo writing, mere months afterward, that the sort of
critique/objection/commentary which I write (including the sort which I wrote in
response to his aforesaid post) is “helpful and important”, “very important to
the success of an epistemic community”, etc. (Which, I must note, is
tremendously to Benquo’s credit. I have the greatest respect for anyone who can
view, and treat, their sometime critics in such a fair-minded way.) This seems
like very much the opposite of leaving Less Wrong as a result of my commenting
style. It seems to me that when the prime example you provide of my
participation in discussions on Less Wrong purportedly being the sort of thing
that drives authors away, actually turns out to be an example of exactly the
opposite—of an author (whose post I criticized, in somewhat harsh terms) fairly
soon (months) thereafter saying that my critical comments are good and important
to the community and that I should continue… … well, then regardless of whether
you agree with the author in question about whether or not my comments are
good/important/whatever, the fact that he holds this view casts very serious doubt…

2 · Duncan_Sabien · 15m

Said is asking Ray, not me, but I strongly disagree.

Point 1 is that a black raven is not strong evidence against white ravens. (Said knows this, I think.)

Point 2 is that a behavior which displeases many authors can still be pleasant or valuable to some authors. (Said knows this, I think.)

Point 3 is that benquo's view on even that specific comment is not the only author-view that matters; benquo eventually being like "this critical feedback was great" does not mean that other authors watching the interaction at the time did not feel "ugh, I sure don't want to write a post and have to deal with comments like this one." (Said knows this, I think.)

(Notably, benquo once publicly stated that he suspected a rough interaction would likely have gone much better under Duncan moderation norms specifically; if we're updating on benquo's endorsements then it comes out to "both sets of norms useful," presumably for different things.)

I'd say it casts mild doubt on the thesis, at best, and that the most likely resolution is that Ray ends up feeling something like "yeah, fair, this did not turn out to be the best example," not "oh snap, you're right, turns out it was all a house of cards."

(This will be my only comment in this chain, so as to avoid repeating past cycles.)
Should LW have an official list of norms?

40
Ruby
Site Meta · Community
Personal Blog
8h

To get this written and shared quickly, I haven't polished it much and the
English/explanation is a little rough. Seemed like the right tradeoff though.

Recently, a few users have written their sense of norms for rationalist
discourse, i.e. Basics of Rationalist Discourse and Elements of Rationalist
Discourse. There've been a few calls to adopt something like these as site norms
for LessWrong.

Doing so seems like it'd provide at least the following benefits:

 * It's a great onboarding tool for new users to help them understand the site's
   expectations and what sets it apart from other forums
 * It provided a recognized standard that both moderators and other users can
   point to and uphold, e.g. by pointing out instances where someone is failing
   to live up to one of the norms
 * Having it

...
(Continue Reading – 1223 more words)
Zack_M_Davis · 15m · 4 · -2

I think the last three months are a pretty definitive demonstration that talking
about "norms" is toxic and we should almost never do it. I'm not interested, at
all, in "norms." (The two posts I wrote about them were "defensive" in nature,
arguing that one proposed norm was bad as stated, and expressing skepticism
about the project of norms lists.)

I'm interested in probability theory, decision theory, psychology, math, and AI. Let's talk about those things, not "norms." If anyone dislikes a comment about probability theory, decision theory, psychology, math,... (read more)

Reply

2 · the gears to ascension · 2h
very insightful, but something sets me on edge that truthseeking is being
compared by central example to trying to hurt other humans. My intuition is that
that will leak unhealthy metaphor, but I also don't explicitly see how it would
do so and therefore can't currently give more detail. (this may have something
to do with my waking up with a headache.)

4 · gilch · 2h
I suppose there are a lot more Void metaphors in the Tao Te Ching that we could
borrow instead, although maybe not all of them are as apt. Yudkowsky likened
rationality to a martial art in the Sequences. It's along the same theme as the
rest of that. Martial arts are centered around fighting, which can involve
hurting other humans, but more as a pragmatic means to an end rather than, say,
torture.

4 · gilch · 2h
This, TBH. Maybe also the Litany of Tarski points at the same thing. I feel like that's the wording that left the deepest impression on me, at least on the epistemic side. "Rationalists should win," I think did it for me on the instrumental side, although I'm afraid that one is especially prone to misinterpretation as tribalism, rather than as the Void of decision theory as originally intended.

I would be very worried about the effects of enshrining norms in a list. Like, we have implicit norms anyway. It's not like we can choose not to have them, but trying to cement them might easily get them wrong and make it harder to evolve them as our collective knowledge improves. I can perhaps see the desire to protect our culture from the influx of new users in this way, but I think there are probably better approaches. Like maybe we could call them "training wheels" or "beginner suggestions" instead of "norms".

I also like the idea of techniques of discourse engaged in by mutual consent. We don't always have to use the same mode. Examples are things like Crocker's Rules, Double Crux, Prediction Markets, Bets, Street Epistemology, and (I suppose) the traditional debate format. Maybe you can think of others. I think it would be more productive to explore and teach techniques like these rather than picking any one style as "normal". We'd use the most appropriate tool for the job at hand.
Sinclair Chen's Shortform

Sinclair Chen
11d

Sinclair Chen · 18m · 10

despite the challenge, I still think being a founder or early employee is incredibly awesome

coding, product, design, marketing, really all kinds of building for a user - is the ultimate test.
it's empirical, challenging, uncertain, tactical, and very real.

if you succeed, you make something self-sustaining that continues to do good.
if you fail, it will do bad. and/or die.

and no one will save you.

Reply

2 · Sinclair Chen · 12h
Moderating lightly is harder than moderating harshly. Walled gardens are easier
than creating a community garden of many mini walled gardens. Platforms are
harder than blogs. Free speech is more expensive than unfree speech. Creating a
space for talk is harder than talking. The law in the code and the design is
more robust than the law in the user's head yet the former is much harder to
build.

1 · Sinclair Chen · 12h
Moderation is hard yo

Y'all had to read through pyramids of doom containing forum drama last week. Or maybe, like me, you found it too exhausting and tried to ignore it.

Yesterday Manifold made more than $30,000, off a single whale betting in a self-referential market designed like a dollar auction, and also designed to get a lot of traders. It's the biggest market yet, 2200 comments, mostly people chanting for their team. Incidentally parts of the site went down for a bit.

I'm tired. I'm no longer as worried about series A. But also ... this isn't what I wanted to build. The rest of the team kinda feels this way too. So does the whale in question.

Once upon a time, someone at a lw meetup asked me, half joking, that I please never build a social media site.

1 · Sinclair Chen · 32m
Update: Monetary policy is hard yo

Isaac King ended up regretting his mana purchase a lot after it started to become clear that he was losing in the whales vs minnows market. We ended up refunding most of his purchase (and deducting his mana accordingly, bringing his manifold account deeply negative). Effectively, we're bailing him out and eating the mana inflation :/

Aside: I'm somewhat glad my rant here has not gotten many upvotes/downvotes ... it probably means the meme war and the spammy "minnow" recruitment calls haven't reached here much, fortunately...
Exploring the Lottery Ticket Hypothesis

27
Rauno Arike
Lottery Ticket Hypothesis · AI
Frontpage
9h

I have recently been fascinated by the breadth of important mysteries in deep
learning, including deep double descent and phase changes, that could be
explained by a curious conjectured property of neural networks called the
lottery ticket hypothesis. Despite this explanatory potential, however, I
haven't seen much discussion about the evidence behind and the implications of
this hypothesis in the alignment community. Being confused about these things
motivated me to conduct my own survey of the phenomenon, which resulted in this
post.


THE LOTTERY TICKET HYPOTHESIS, EXPLAINED IN ONE MINUTE

The lottery ticket hypothesis (LTH) was originally proposed in a paper by
Frankle and Carbin (2018):

> A randomly-initialized, dense neural network contains a subnetwork that is
> initialized such that—when trained in isolation—it can match the test accuracy
> of the original

...
(Continue Reading – 3201 more words)
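
To make the experimental procedure behind the hypothesis concrete: as I understand the paper, the original results come from iterative magnitude pruning with rewinding to the initial weights. Below is a minimal sketch of that loop, assuming PyTorch; train() is a placeholder training routine and all names are illustrative, not taken from the post or the paper.

    # Sketch of iterative magnitude pruning with weight rewinding (lottery-ticket style).
    # Assumes PyTorch; train(model, masks) is a placeholder the reader supplies, and is
    # expected to zero out masked weights/gradients during training.
    import copy
    import torch

    def find_winning_ticket(model, train, prune_frac=0.2, rounds=5):
        init_state = copy.deepcopy(model.state_dict())  # remember the original init (theta_0)
        masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}

        for _ in range(rounds):
            train(model, masks)  # train the currently unpruned subnetwork
            for n, p in model.named_parameters():
                if n not in masks:
                    continue
                alive = p.detach().abs()[masks[n].bool()]
                if alive.numel() == 0:
                    continue
                threshold = alive.quantile(prune_frac)  # cut the smallest surviving weights
                masks[n] = masks[n] * (p.detach().abs() > threshold).float()
            model.load_state_dict(init_state)  # rewind survivors to their initial values

        return masks  # the "winning ticket": a sparse subnetwork of the original initialization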
jacob_cannell · 23m · 20

Consider first the more basic question: why is simple SGD on over-parameterized ANNs an effective global optimizer? This is the first great mystery of ANNs from classical ML theory: they should get stuck in various local minima and/or overfit, but generally they don't (with a few tweaks) and just work better and better with scale. Many other techniques generally don't have this property.

A large oversized ANN can encode not just a single circuit solution, but an entire ensemble of candidate circuits (which dropout makes more explicit), and SGD then explo... (read more)
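
As a toy illustration of the ensemble framing (my example, assuming PyTorch): with dropout active, each forward pass through the same over-parameterized network runs a different randomly sampled subnetwork.

    # Each training-mode forward pass with dropout samples a different subnetwork.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 1))
    x = torch.randn(1, 8)

    net.train()  # dropout active: random subnetwork per pass
    samples = torch.cat([net(x) for _ in range(1000)])
    print("spread across sampled subnetworks:", samples.std().item())

    net.eval()   # dropout off: the full net approximates the ensemble average
    print("deterministic output:", net(x).item())
    print("mean of sampled subnetworks:", samples.mean().item())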

Reply

5 · johnswentworth · 8h
Note that this has changed over time, as network architectures change; I doubt
that it applies to e.g. the latest LLMs. The thing about pruning doing a whole
bunch of optimization does still apply independent of whether net training is
linear-ish (though I don't know if anyone's repro'd the lottery ticket
hypothesis-driven pruning experiments on the past couple years' worth of LLMs).

8 · Zach Furman · 3h
A bit of a side note, but I don't even think you need to appeal to new architectures - it looks like the NTK approximation performs substantially worse even with just regular MLPs (see this paper [https://arxiv.org/pdf/2106.06770.pdf], among others).
My Assessment of the Chinese AI Safety Community

172
Lao Mein
China · AI Governance · AI Risk · AI
Frontpage
1d

I've heard people be somewhat optimistic about this AI guideline from China.
They think that this means Beijing is willing to participate in an AI
disarmament treaty due to concerns over AI risk. Eliezer noted that China is where the US was a decade ago in regard to AI safety awareness, and expressed genuine hope that his idea of an AI pause can take place with Chinese buy-in.

I also note that no one expressing these views understands China well. This is a
PR statement. It is a list of feel-good statements that Beijing publishes after
any international event. No one in China is talking about it. They're talking
about how much the Baidu LLM sucks in comparison to ChatGPT. I think most
arguments about how this statement...

(See More – 590 more words)
mic · 40m · 20

Relevant: China-related AI safety and governance paths - Career review
(80000hours.org)

Reply

2 · trevor · 3h
I just want to note that rationality can fit into the Chinese idea sphere very neatly; it's just that it's not effortless to figure out how to make it work. The current form, e.g. the Sequences, is wildly inappropriate. Even worse, a large proportion of the core ideas would have to be cut out. But if you focus on things like human intelligence amplification and forecasting and cognitive biases, it will probably fit into the scene very cleanly.

I'm not willing to give any details until I can talk with some people and get solid estimates on the odds of bad outcomes, like the risk that rationality will spread but AI safety doesn't, and then the opportunity is lost.

The "baggage" thing you mentioned is worth serious consideration, of course. But I want to clarify that yes, EA won't fit, but rationality can (if done right, which is not easy but also not hard); please don't rule it out prematurely.

5 · 142857 · 5h
This has already been done [https://hpmor.xyz/hpmor_index/], and has pretty good reviews [https://book.douban.com/subject/26263536/] and some discussions [https://www.zhihu.com/question/23875965].

If these are public, could you post the links to them? Do you know the name of the group, and what kinds of approaches they are taking toward technical alignment?

3 · Lao Mein · 5h
Tian-xia forums are invite-only and mostly expats. I should probably dig deeper to find native Chinese discussions.

CSAGI. Unfortunately, their website (csagi.org) has been dead for a while. It's founded by Zhu Xiaohu [https://www.jianshu.com/u/696dc6c6f01c]. He mentioned bisimulation and reinforcement learning.
Fast Minds and Slow Computers

47
jacob_cannell
Whole Brain Emulation · Neuroscience · Computer Science
Personal Blog
12y

The long term future may be absurd and difficult to predict in particulars, but
much can happen in the short term.

Engineering itself is the practice of focused short term prediction; optimizing
some small subset of future pattern-space for fun and profit.

Let us then engage in a bit of speculative engineering and consider a potential
near-term route to superhuman AGI that has interesting derived implications.  

Imagine that we had a complete circuit-level understanding of the human brain
(which at least for the repetitive laminar neocortical circuit, is not so far
off) and access to a large R&D budget.  We could then take a
neuromorphic approach.

Intelligence is a massive memory problem.  Consider as a simple example:

> What a cantankerous bucket of defective lizard scabs.

To understand that sentence your brain needs to match it...

(Continue Reading – 1214 more words)

1 · Archimedes · 2h
Have the distributed architecture trends and memristor applications followed the
rough path you expected when you wrote this 12 years ago? Is this
[https://arxiv.org/abs/1901.03690] or this
[https://semiengineering.com/von-neumann-is-struggling/] the sort of thing you
were gesturing at? Do you have other links or keywords I could search for?
jacob_cannell · 2h · 2

The distributed arch prediction with supercomputers farther ahead was correct - Nvidia grew from a niche gaming company to eclipse Intel and is on some road to stock market dominance, all because it puts old parallel supercomputers on single chips.

Neuromorphic computing in various forms is slowly making progress: there's IBM's TrueNorth research chip, for example, and a few others. Memristors were overhyped and crashed, but are still in research and may yet come to be.

So instead we got big GPU clusters, which for the reasons explained in the article can't... (read more)

Reply
Max Tegmark's new Time article on how we're in a Don't Look Up scenario
[Linkpost]

29
Jonas Hallgren
Public Reactions to AI · AI
Personal Blog
14h
This is a linkpost for
https://time.com/6273743/thinking-that-could-doom-us-with-ai/

Max Tegmark has posted a Time article on AI Safety and how we're in a "Don't
Look Up" scenario. 

In a similar manner to Yudkowsky, Max went on Lex Fridman and has now posted a
Time article on AI Safety. (I propose we get some more people into this
pipeline)

Max, however, portrays a more palatable view regarding societal standards. With
his reference to Don't Look Up, I think this makes it one of my favourite pieces
to send to people new to AI Risk, as I think it describes everything that your
average joe needs to know quite well. (An asteroid with a 10% risk of killing
humanity is bad)

In terms of general memetics, it will be a lot harder for someone like LeCun to
come up with...

(See More – 87 more words)
Jonathan Yan · 2h · 10

For reference, https://aiguide.substack.com/p/do-half-of-ai-researchers-believe
is a recent blog post about the same claim. After fact-checking, the author is
"not convinced" by the survey.

Reply

4 · Vladimir_Nesov · 4h
Only 20% of the respondents gave a response to that particular question (thanks
to Denreik for drawing my attention to that fact
[https://www.lesswrong.com/posts/fHbiYJixbgsiLuqsy/on-urgency-priority-and-collective-reaction-to-ai-risks-part?commentId=qn6sgDxssgnQqMT9b],
which I verified). Of the initially contacted 4271 researchers, 738 gave
responses (17% of 4271), and 149 (20% of 738) gave a probability for the
"extremely bad" outcome on the non-trick version of the question (without the
"human inability to control" part).

7 · habryka · 8h
The survey seems to have taken reasonable steps to account for responder-bias,
and IIRC at least I couldn't tell any obvious direction in which respondents
were biased. Katja has written some about this here:
https://twitter.com/KatjaGrace/status/1643342692905254912
Response rates
still seem good to mention when mentioning the survey, but I don't currently
believe that getting a survey with a higher response rate would change the
results. Might be worth a bet?

7 · 1a3orn · 7h
Fair enough, didn't know about those steps. That does update me towards this
being representative.
[Feedback please] New User's Guide to LessWrong

22
Ruby
Site Meta
Personal Blog
11h

The LessWrong team is currently thinking a lot about what happens with new users: the bar for their contributions being accepted, how we deliver feedback and restrict low-quality contributions, and, most importantly, how we get them onboarded onto the site.

This is a draft of a document we'd present to new users to help them understand
what LessWrong is about. I'm interested in early community feedback about
whether I'm hitting the right notes here before investing a lot more in it.

This document also references another post that's something more of a list of norms, akin to Basics of Rationalist Discourse, though (1) I haven't written that yet, and (2) I'm much less certain about the shape or nature of it. I'll share a post...

(Continue Reading – 1569 more words)
gilch · 2h · 62

I think it hits a lot of good notes, but I'm not sure if it's all of them we'd
need, and at the same time, I'm worried it may be too long to hit a new user
with all at once. I'm not sure what I'd cut. What would go in a TL;DR?

I maintain that the 12 Virtues of Rationality is a good summary but a poor
introduction. They seemed pretty useless to me until after I had read a lot of
the Sequences. Not beginner material.

Inferential distances and "scout mindset" might be worth mentioning.

I think Raising the Sanity Waterline (if you follow its links) is a great min...
(read more)

Reply

11 · Vladimir_Nesov · 4h
What does it matter what the community believes? This phrasing is a bit
self-defeating, deferring to community is not a way of thinking that helps with
arriving at true beliefs and good decisions. Also, I think references to what
distinguishes rationality
[https://www.lesswrong.com/posts/hN8Ld8YdqFsui2xgc/only-say-rational-when-you-can-t-eliminate-the-word]
from truth and other good things
[https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms]
are useful in that section (these posts are not even in the original sequences).

2 · Ruby · 3h
If you are joining a community and want to be accepted and welcomed, it matters what they believe, value, and are aiming to do. For that matter, knowing this might determine whether or not you want to be involved. Or in other words, that line means to say "hey, this is what we're about."

I do like those posts quite a fair bit. Will add.

11 · Vladimir_Nesov · 3h
The phrasing is ambiguous between descriptive of this fact and prescriptive for it, especially for new people joining the community, which is the connotation I'm objecting to. It's bad as an argument or way of thinking in connection with that sentence; the implication of its relevance in that particular sentence is incorrect. It's not bad to know that it's true, and it's not bad that it's true.