frp.douyin-static.top
202.182.115.176  Public Scan

URL: https://frp.douyin-static.top/
Submission: On January 29 via api from US — Scanned from US

Form analysis: 0 forms found in the DOM

Text Content

真岛建设的粉色泥头车@MoiraLinson@bgme.me
Like a Dragon 8 thread, 8: I heard this spot in the Chapter 4 main story is the "Tiananmen" of Like a Dragon 8.
2024-01-29 15:26:21
0
0
0
真岛建设的粉色泥头车@MoiraLinson@bgme.me
Couldn't chase both of them inside; every time one of them stays out. Oh well, so be it!
2024-01-29 15:13:41
1
0
0
真岛建设的粉色泥头车@MoiraLinson@bgme.me
Playing Like a Dragon 8 in total chaos: I want to become a Sujimon (江湖宝贝) master, I want to hurry over to Dondoko Island, and I want to push the main story to find out as soon as possible how all the old characters are doing. Then I get hooked on the claw machines, look up, and somehow it's eleven o'clock again with hardly any progress made
:blobcatsweats:
2024-01-29 15:08:05
0
0
0
真岛建设的粉色泥头车@MoiraLinson@bgme.me
Aaaaaaah what on earth is this capture process, aaaaaah, I'm dying laughing, help. I never played that other series because I couldn't get past catching wild Pokémon and making them fight for fun, but the Sujimon (江湖宝贝) setup I can accept, and it's actually really fun, lol.
2024-01-29 13:35:29
1
0
0
cw049rtf@flymail.tk@cw049rtf@flymail.tk
2121
2024-01-29 11:44:15
0
0
0
worldwillgood@worldwillgood@mas.to
1231231
2024-01-29 09:01:41
0
0
0
worldwillgood@worldwillgood@mas.to
try
2024-01-29 09:01:21
0
0
0
Adrianna Tan@skinnylatte@hachyderm.io
Speaking of San Francisco restaurants and how often I’m disappointed by them,
this is one of my favorite ones: (click to view) #SanFrancisco #BayArea #BayAreaEats #BAE
2024-01-29 07:38:30
1
0
0
Dawn Ahukanna@dahukanna@mastodon.social
Let’s stop anthropomorphizing & attributing intentionality to an imprinted imitator & producer of grammatically, semantically and lexically correct human word salad. “AI”/machine “data imprinted” Large Language Models (LLM) & conversation “chat” interfaces do not have feelings, goals, relationships, etc., so they can’t hallucinate (perception without external stimulus), lie (intentionally mislead) or act sycophantically (obsequiously, to gain advantage) any more than a typewriter or your smartphone can.
2024-01-29 07:22:30
0
1
0
Dawn Ahukanna@dahukanna@mastodon.social
@adredish Current “AI”/statistical inference (SI) tech, represented by machine “data imprinting” (it’s not continuously learning/updating; it gets only a one-time data set), Large Language Models (LLM) & conversation “chat” interfaces, operates like "spread betting" (click to view): "writing 'B.S.' cheques their 'models' can't cash". They calculate & produce word combinations that are semantically, grammatically & contextually valid, not ones that are "grounded" in our perceived & physical reality.
2024-01-29 07:22:30
1
0
0
Redish Lab@adredish@neuromatch.social
@dahukanna Yeah, I hate the way they stole the term AI. As an old AI researcher, it's really frustrating the way that they successfully took these two huge fields (AI and ML) and narrowed them down to a single model, which is a very poor model of general intelligence. PS. I think it is a very good model of certain human behaviors - that of BS'ing. Humans also pattern-match to spin stories that sound good and are meaningless, but to claim that's a model of "intelligence" is insane. (As a teacher, it's explicitly what we are trying to get our students NOT to do.)
2024-01-29 07:20:17
1
2
0
Dawn Ahukanna@dahukanna@mastodon.social
@adredish Glad you like it. Please promote & use it in social conversation vocabulary, since when people hear “Artificial Intelligence” or “Machine Learning” they think the equivalent of Albert Einstein’s genius is encoded, and defer because it’s a computer. Unfortunately, it’s been imprinted from some of the most toxic “expressed-in-words” sources, like bigoted & hate groups on Reddit, Twitter, etc. That content is shredded/grated & spread throughout the model. Have you ever tried unmelting grated cheese?
2024-01-29 07:20:17
1
0
0
Redish Lab@adredish@neuromatch.social
@dahukanna I really like that point that these current production "AI" systems (Generative AI / Large Language Models) are "imprinting", not "learning", because they are trained on a single data set and then set out into the wild to use that data without correction. Real systems are constantly learning from experience.
2024-01-29 07:20:17
1
0
0
Dawn Ahukanna@dahukanna@mastodon.social
Current “AI”/statistical inference (SI) tech, represented by machine “imprinting” (it’s not continuously learning/updating; there is only a one-time update), Large Language Models (LLM) & conversation “chat” interfaces, is decimating cultural, social & ethical assumptions (statements taken as facts, without proof), AKA “norms”. As a society, we’re not prepared for the impact of rethinking, reconsidering & updating our current human practices & conventions with a non-human machine as a participant. #AIEthics
2024-01-29 07:20:17
1
0
0
真岛建设的粉色泥头车@MoiraLinson@bgme.me
Damn it, a very clean scraper-based site I'd been using to watch movies online has also started popping ads for porn sites, orz. Do any kind passersby have recommendations for another site like this? The one that's gone bad is called ikanbot.
2024-01-29 06:28:19
2
0
0
浅水城的米需@dawningli@mona.do
I brought this on myself by discussing 中扎 with a coworker; now their Bilibili and Xiaohongshu feeds are gradually flooded with 中扎 and Zhang Qi, so whenever they see something they send it to me. What have I done.
2024-01-29 04:29:55
1
0
0
真岛建设的粉色泥头车@MoiraLinson@bgme.me
That Wind Shaman substory cracked me up. Later, when I ran into an old man praying for snow, I figured we'd be off to find a Snow Shaman, and it turned out to be the baby family?! Aaaaah, you guys are still around!!! Kiryu, come look, you've fought this guy too, right??? Though in the end, naming Patriarch Gondawara a Snow Shaman wouldn't feel wrong either, hhh.
Will there be a Fire Shaman substory next? Just as I was wondering that, I spotted 猪狩 doing a torch dance across the street.
2024-01-29 02:34:48
1
0
0
真岛建设的粉色泥头车@MoiraLinson@bgme.me
Slacking off at work and devouring everyone's Like a Dragon 8 threads, I realized that even though I took a whole day off specifically to stay home and play, my progress doesn't look anything like three days' worth 😂 Thanks, everyone, for folding your posts and adding chapter numbers! Even someone whose progress is slow can read along with zero pressure
:isuck:
2024-01-29 01:27:02
0
0
0

