
FAQ About Being Ghosted After Your Final Interview

1 Share

Q: I haven’t heard anything since my final interview. Who should I contact?

A: Damn, that’s crazy. Wow.

Q: How long will it take to hear back?

A: It will take some time. (If you’re successful.)

Q: And what if I’m unsuccessful?

A: You will know if you’re unsuccessful.

Q: How?

A: You won’t be working here.

Q: Well, yes, but won’t you be telling me that I didn’t get the job?

A: Why would we do that?

Q: Wait. Have I been ghosted?

A: We prefer the term “unworthy of closure.”

Q: What? Why have I been ghosted?

A: It could be that you’re arrogant. It could be that you’re humble. It could be that you’re too boisterous or too quiet. It could be you didn’t ask enough questions or you asked too many. It could be because you brought up working from home too soon. Or too late. It could be your overall personality and dislikability. It could be because you’re obviously pregnant. Ultimately, it’s because you don’t deserve this job, skills-wise or as a human being.

Q: Was there anything I could’ve done?

A: No. But also yes.

Q: That’s confusing. Could you please explain?

A: You could’ve been an overall better and more deserving person, although not too much better.

Q: That doesn’t help with my confusion. What do you mean “not too much better”?

A: If you met all the requirements, were totally qualified for the role, and would be a top performer almost immediately, you’d threaten the hiring manager’s ego. Try to have a bit of compassion, would you? (This might be why you’re not getting the job.)

Q: Was it something to do with my salary expectations?

A: We don’t usually offer employment to people who require a market-rate salary.

Q: I just… isn’t it common human decency to let someone know if they got the job or not? I spent a lot of time and effort in this process; haven’t I got the right to some sort of closure?

A: A company doesn’t ghost you and then expect you to show up and do the job, do they? They ghost you because you didn’t get the job. (Again, because you’re undeserving.) That should be closure enough.

Q: I can’t help but think it’s a bit rude. What about feedback on improving for any future interviews?

A: We gave you clear feedback: Be (a [little] bit) better (but not too much).

Q: That’s very nonspecific. Isn’t there anything at all you could help me with?

A: [Candidate is becoming needy. Classic anxious-attachment style. Not a culture fit.]

Q: Hello?

A:

Read the whole story
prirai
50 days ago
reply
Share this story
Delete

Random Thoughts on AI (Human Generated)


(I wrote this post without any AI help. Oh, maybe not: I used spellcheck. Does that count? Lance claims he proofread it and found some typos to correct, also without any AI help.)

Random Thoughts on AI

I saw a great talk on AI recently by Bill Regli, who works in the field. 

Announcement of the talk: here

Video of the talk:  here

-----------------------------------------------
1) One item Bill R mentioned was that AI requires lots of energy, so Three Mile Island is being reopened. See here.

Later I recalled the song

        The Girl from 3-Mile Island

to the tune of

        The Girl from Ipanema.

The song is in my audio tape collection, but that is not useful, so I looked for it on the web. The copy on YouTube doesn't work; however, this website of songs about Three Mile Island here includes it.

In the 1990s I was in charge of the Dept Holiday Entertainment, since I have an immense knowledge of, and collection of, novelty songs, many in CS and Math.

Today, my talents are no longer needed, as anyone can do a Google search and find stuff. I did a blog post on that here. I still have SOME advantage, since I know what's out there, but not as much. Indeed, AI can even write and sing songs; I blogged about that and pointed to one such song here.

SO, some people's talents and knowledge are becoming obsolete. On the level of novelty songs I am actually HAPPY that things change: I can access so much stuff I could not before. But humans becoming obsolete is a serious issue of employment and self-worth, far more serious than MACHINES TAKE OVER THE WORLD scenarios.

---------------------------------------------------------
2) When technology made farming jobs go away, manufacturing jobs took their place. That was true in the LONG run, but in the SHORT run there were starving ex-farmers. The same may happen now.

(ADDED LATER: someone emailed me that machines taking over farming and other things have caused standards of living to go up. YES, I agree: in the LONG run very good, but in the short run people did lose their livelihoods.)

Truck Drivers and Nurses may do better than Accountants and Lawyers:

Self-driving trucks are 10 years away and always will be.
Nurses need a bedside manner that AI doesn't have (for now?).

One ADVANTAGE of AI is that if it makes white-collar workers lose jobs, the government might get serious about

Guaranteed Basic Income, and

Universal Health Care.

(ADDED LATER: someone emailed me that GBI is not the way to go. Okay, then I should rephrase: when white-collar workers lose their jobs, the problem of a social safety net will suddenly become important.)

Similar: if global warming makes the Cayman Islands sink, then suddenly Global Warming will be an important problem to solve.

------------------------------------------------
3) An example of AI taking away jobs is the Writers Strike.

OLD WAY: There were 10 people writing Murder She Wrote scripts.

NEW WAY: An AI generates a first draft, and only 2 people are needed to polish it.

KEY: In a murder mystery the guilty person is an innocuous character you saw in the first 10 minutes or a celebrity guest star. Sometimes the innocuous character is the celebrity guest star.

-------------------------------------------------
4) ChatGPT and school and cheating.

Calculator Scenario: We will allow students to use ChatGPT as we now allow calculators. Students are not as good at arithmetic, but we don't care. Is ChatGPT similar?

Losing Battle Scenario: Ban ChatGPT.

My solution, which works (for now): ask questions that ChatGPT is not good at, allow ChatGPT, insist that students understand their own work, and have them admit they used it. This works well in grad courses and even senior courses; it might be hard in freshman courses.

Lance's solution: stop giving out grades. See here.

----------------------------------------------
5) Bill R said that we will always need humans who are better at judgment.

Maybe a computer has better judgment. I blogged on this here.

 --------------------------------------------------
6) I asked two AI people at lunch if the AI revolution is just because of faster computers and hence is somewhat limited. They both said YES.

SO, could it be that we are worrying about nothing?

This may also be an issue for academia: if we hire lots of AI people because it's a hot area, it may cool off soon. Actually, I thought the same thing about Quantum Computing, but I was wrong there.

----------------------------------------------
7) LLMs use LOTS of energy. If you get to ask one "How do we solve global warming?" it might say:

First step: Turn me off!

----------------------------------------------
8) Scott wrote a great blog post about the ways AI could go. See here.

--------------------------------
9) I recently emailed Lance a math question.

He emailed me the answer 5 minutes later.

I emailed that I was impressed.

He emailed that he had just asked ChatGPT. He had not meant to fool me; he just assumed I would assume that. It's like if you asked me what 13498*11991 was and I answered quickly: you would assume I used a calculator. And if there is a complicated word in this post that is spelled correctly, you would assume I used spellcheck, and there is no embarrassment in that.
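The calculator analogy turns on arithmetic nobody does by hand. A one-line check (Python, purely illustrative) confirms the product:

```python
# The multiplication from the calculator analogy, computed by machine.
print(13498 * 11991)  # 161854518
```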

--------------------------------
10) If a painting is done with AI does any human get credit for it?

I always thought that people who forge paintings that look JUST LIKE (say) a van Gogh should be able to be honest about what they do and still get good money: since it LOOKS like a van Gogh, who cares that it is NOT a van Gogh? Same with AI: we should not care that a human was not involved.

If an AI finds a cure for cancer, Great!

If an AI can write a TV series better than the human writers, Great!

--------------------------------------------------------
11) AI will force us to make moral choices. Here is a horrifying scenario:

Alice buys a self-driving car and is given some options, essentially the trolley problem:

If your car has to choose whom to run over, whom do you choose?

You have the option of picking by race, gender, age, who is better dressed, anything you want.

-------------------------------------------------------
12) Climate Change has become a political problem, in that

Democrats think it IS a problem,
Republicans think it is NOT a problem.

Which is a shame, since free-market solutions that would normally appeal to Republicans are not being tried (e.g., a Carbon Tax). Indeed, we are doing the opposite: some states impose a tax on hybrid cars.


SO, how will AI go with politics? Scenarios:

a) Dems are for regulation, Reps are against it. Elon Musk worries about AI, and he is a powerful Rep, so this might not happen. Then again, he supports Reps, many of whom have BANNED e-cars in their states.

b) AI-doomsayers want more regulation, AI-awesomers do not, and this cuts across party lines.

c) We will ignore the issue until it's too late.

If I were a betting man ...

----------------------------------------------------------
13) International cooperation on being careful with AI? Good luck with that.

My cynical view: international treaties only work when there is nothing at stake.

The chemical-weapons ban works because they are hard to use anyway.

The treaty on exploring Antarctica was working until people found stuff there that they wanted. It is now falling apart.


It’s Harvard time, baby: “Kerfuffle” is what you call it when you completely botched your data but you don’t want to change your conclusions.


For it’s Harvard this, an’ Harvard that, an’ “Give the debt the boot!”
But it’s “Academic kerfuffle,” when the guns begin to shoot.

— Rudyard Kipling, American Economic Review, May, 1890.

Remember the “Excel error”? This was the econ paper from 2010 by Reinhart and Rogoff, where it turned out they completely garbled their analysis by accidentally shifting a column in their Excel table, and then it took years for it to all come out? And this was no silly Psychological Science / PPNAS bit of NPR and Gladwell bait; it was a serious article with policy relevance.

At the time this story blew up, I had some sympathy for Reinhart and Rogoff. Nobody suggested that they’d garbled their data on purpose. Even aside from that, the data analysis does not seem to have been so great (see here for some discussion), but lots of social scientists are not so great with statistics, and even if you disagree with Reinhart and Rogoff’s policy recommendations, you have to give them credit for attacking a live research problem. In this post from 2013, I criticized Reinhart and Rogoff for not admitting they’d messed up (“I recommend they start by admitting their error and then going on from there. I think they should also thank Herndon, Ash, and Pollin for finding multiple errors in their paper. Admit it and move forward.”), while at the same time recognizing that researchers are not trained to admit error. I was disappointed with the behavior of the authors of that paper after they were confronted with their errors, but I was not surprised or very annoyed.

But then I read this post by Gary Smith and now I am kinda mad. Smith writes:

In 2010, two Harvard professors, Carmen Reinhart and Ken Rogoff, published a paper in the American Economic Review, one of the world’s most-respected economics journals, arguing that when the ratio of a nation’s federal debt to its GDP rises above a 90% tipping point, the nation is likely to slide into an economic recession. . . .

Reinhart/Rogoff had made a spreadsheet error that omitted five countries (Australia, Austria, Belgium, Canada, and Denmark). Three of these countries had experienced debt/GDP ratios above 90% and all three had positive growth rates during those years. In addition, some data for Australia (1946–50), Canada (1946–50), and New Zealand (1946–49) are available, but were inexplicably not included in the Reinhart/Rogoff calculations.

The New Zealand omission was particularly important because these were four of the five years when New Zealand’s debt/GDP ratio was above 90%. Looking at all five years, the average GDP growth rate was 2.6%. With four of the five years excluded, New Zealand’s growth rate during the remaining high-debt year was a calamitous -7.6%.

There was also unusual averaging. . . . The bottom line is that Reinhart and Rogoff reported that the overall average GDP growth rate in high-debt years was a recessionary -0.1% but if we fix the above problems, the average is 2.2%.
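The arithmetic behind the New Zealand exclusion and the unusual averaging is easy to reproduce. In this sketch (Python, purely illustrative), only the 2.6% five-year average and the lone -7.6% year come from the post; the other yearly values and the second country's 19-year series are made-up placeholders, chosen so the averages come out as described:

```python
def mean(xs):
    return sum(xs) / len(xs)

# New Zealand's five high-debt years: -7.6 is from the post; the other
# four values are hypothetical, chosen so the five-year mean is 2.6.
nz = [11.9, 3.9, 3.5, 1.3, -7.6]
print(round(mean(nz), 1))  # 2.6  (all five years included)
print(nz[-1])              # -7.6 (the single year that was kept)

# "Unusual averaging": a mean of country means gives New Zealand's one
# retained year the same weight as another country's 19 years
# (the 19-year series at 2.4% is a made-up placeholder).
other = [2.4] * 19
by_country = mean([mean(nz[-1:]), mean(other)])  # (-7.6 + 2.4) / 2
pooled = mean(nz[-1:] + other)                   # weight every year equally
print(round(by_country, 1), round(pooled, 1))    # -2.6 1.9
```

The same mechanism, applied across all the affected countries, is how a roughly 2.2% average can be reported as a recessionary -0.1%.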

That part of the story I’d heard before. But then there was this:

In a 2013 New York Times opinion piece, Reinhart and Rogoff dismissed the criticism of their study as “academic kerfuffle.”

C’mon. You are two Harvard professors; you published an article in an academic journal, leveraged the reputation of academic economics to make policy recommendations to the U.S. Congress, and then you talk about “academic kerfuffle”! If you don’t want “academic kerfuffle,” maybe you should just write op-eds, maybe start a radio call-in show, etc.

It’s Harvard this, an’ Harvard that, when all is going well. But then when some pesky students and faculty at the University of Massachusetts check your data and find that you screwed everything up, then it’s academic kerfuffle!

UMass, can you imagine? The nerve of those people!

So, yeah, now I’m annoyed at Reinhart and Rogoff. If you don’t like academic kerfuffle, get out of goddam academia already. For a pair of decorated Harvard professors to dismiss serious criticism as “kerfuffle”—that’s a disgrace. It was a disgrace in 2013 and it remains a disgrace until they apologize for this anti-scientific, anti-scholarly attitude.

P.S. Just for some perspective on the way that work had been hyped, here’s a NYT puff piece on Reinhart and Rogoff from 2010:

Like a pair of financial sleuths, Ms. Reinhart and her collaborator from Harvard, Kenneth S. Rogoff, have spent years investigating wreckage scattered across documents from nearly a millennium of economic crises and collapses. They have wandered the basements of rare-book libraries, riffled through monks’ yellowed journals and begged central banks worldwide for centuries-old debt records. And they have manually entered their findings, digit by digit, into one of the biggest spreadsheets you’ve ever seen.

OK, you can’t fault the Times for a puff that appeared nearly three years before the error was reported. Still, it’s kinda funny that, of all things, they were praising those researchers for . . . their spreadsheet!


⌥ The Generation Generation


A little over two years after OpenAI released ChatGPT upon the world, and about four years since Dall-E, the company’s toolset now — “finally” — makes it possible to generate video. Sora, as it is called, is not the first generative video tool to be released to the public; there are already offerings from Hotshot, Luma, Runway, and Tencent. OpenAI’s is the highest-profile so far, though: the one many people will use, and the products of which we will likely all be exposed to.

A generator of video is naturally best seen demonstrated in that format, and I think Marques Brownlee’s preview is a good place to start. The results are, as I wrote in February when Sora was first shown, undeniably impressive. No matter how complicated my views about generative A.I. — and I will get there — it is bewildering that a computer can, in a matter of seconds, transform noise into a convincing ten-second clip depicting whatever was typed into a text box. It can transform still images into video, too.

It is hard to see this as anything other than extraordinary. Enough has been written by now about “any sufficiently advanced technology [being] indistinguishable from magic” to bore, but this truly captures it in a Penn & Teller kind of way: knowing how it works only makes it somehow more incredible. Feed computers on a vast scale video which has been labelled — partly by people, and partly by automated means which are reliant on this exact same training process — and it can average that into entirely new video that often appears plausible.1 I am basing my assessment on the results generated by others because Sora requires a paid OpenAI account, and because there is currently a waiting list.
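That "noise in, plausible sample out" process is, conceptually, an iterative denoising loop. The toy below is my own illustration, not OpenAI's actual method: the stub stands in for a trained network, and a single scalar stands in for video frames. It shows only the shape of the procedure, start from random noise and repeatedly refine toward the data:

```python
import random

def denoise_step(x, guess, t, steps):
    # A trained model would predict which noise to remove at step t;
    # this stub just moves x a fraction of the way toward its "guess"
    # of the clean sample, with smaller corrections left near the end.
    return x + (guess - x) / (steps - t)

def sample(guess=1.0, steps=50):
    x = random.gauss(0.0, 1.0)   # start from pure noise
    for t in range(steps):
        x = denoise_step(x, guess, t, steps)
    return x                     # in this toy, lands exactly on the guess
```

The real systems do this over enormous tensors, conditioned on the text prompt, which is why the result looks like an "average" of the labelled training video rather than a copy of any one clip.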

There are, of course, limitations of both technology and policy. Sora has problems with physics, the placement of objects in space, and consistency between and within shots. Sora does not generate audio, even though OpenAI has the capability. Prompts in text and images are checked for copyright violations, public figures’ likenesses, criminal usage, and so forth. But there are no meaningful restrictions on the video itself. This is not how things must be; this is a design decision.

I keep thinking about the differences between A.I. features and A.I. products. I use very few A.I. products; an open-ended image generator, for example, is technically interesting but not very useful to me. Unlike a crop of Substack writers, I do not think pretending to have commissioned art lends me any credibility. But I now use A.I. features on a regular basis, in part because so many things are now “A.I. features” in name and by seemingly no other quality. Generative Remove in Adobe Lightroom Classic, for example, has become a terrific part of my creative workflow. There are edits I sometimes want to make which, if not for this feature, would require vastly more time which, depending on the job, I may not have. It is an image generator just like Dall-E or Stable Diffusion, but it is limited by design.

Adobe is not taking a principled stance; Photoshop contains a text-based image generator which, I think, does not benefit from being so open-ended. It would, for me, be improved if its functionality were integrated into more specific tools; for example, the crop tool could also allow generative reframing.

Sora, like ChatGPT and Dall-E, is an A.I. product. But I would find its capabilities more useful and compelling if they were a feature within a broader video editing environment. Its existence implies a set of tools which could benefit a video editor’s workflow. For example, the object removal and tracking features in Premiere Pro feel more useful to me than its ability to generate b-roll, which just seems like a crappy excuse to avoid buying stock footage or paying for a second unit.

Limiting generative A.I. in this manner would also make its products more grounded in reality and less likely to be abused. It would also mean withholding capabilities. Clearly, there are some people who see a demonstration of the power of generative A.I. as a worthwhile endeavour unto itself. As a science experiment, I get it, but I do not think these open-ended tools should be publicly available. Alas, that is not the future venture capitalists, and shareholders, and — I guess — the creators of these products have decided is best for us.

We are now living in a world of slop, and we have been for some time. It began as infinite reams of text-based slop intended to be surfaced in search results. It became image-based slop which paired perfectly with Facebook’s pivot to TikTok-like recommendations. Image slop and audio slop came together to produce image slideshow slop dumped into the pipelines of Instagram Reels, TikTok, YouTube Shorts. Brace yourselves for a torrent of video slop about pyramids and the Bermuda Triangle and pyramids. None of these were made using Sora, as far as I know; at least some were generated by Hailuo from Minimax. I had to dig a little bit for these examples, but not too much, and it is only going to get worse.

Much has been written about how all this generative stuff has the capability of manipulating reality — and rightfully so. It lends credence to lies, and its mere existence can cause unwarranted doubt. But there is another problem: all of this makes our world a little bit worse because it is cheap to produce in volume. We are on the receiving end of a bullshit industry, and the toolmakers see no reason to slow it down. Every big platform — including the web itself — is full of this stuff, and it is worse for all of us. Cynicism aside, I cannot imagine the leadership at Google or Meta actually enjoys using their own products as they wade through generated garbage.

This is hitting each of us in similar ways. If you use a computer that is connected to the internet, you are likely running into A.I.-generated stuff all the time, perhaps without being fully aware of it. The recipe you followed, the repair guide you found, the code you copy-and-pasted, and the images in the video you watched? Any of them could have been generated in a data farm somewhere. I do not think that is inherently bad, though it is an uncertain feeling.

I am part of the millennial generation. I grew up at a time in which we were told we were experiencing something brand new in world history. The internet allowed anyone to publish anything, and it was impossible to verify this new flood of information. We were taught to think critically and be cautious, since we never knew who created anything. Now we have a different problem: we are unsure what created anything.


  1. Without thinking about why it is the case, it is interesting how generative A.I. has no problem creating realistic-seeming text as text, but it struggles when it is an image containing text. But with a little knowledge about how these things work, that makes sense. ↥︎


Favorite Theorems: The Complete List


Now in one place: all sixty of my favorite theorems from the six decades of computational complexity (1965-2024).

2015-2024

1985-1994

To mark my first decade in computational complexity during my pre-blog days, I chose my first set of favorite theorems from that time period for an invited talk and paper (PDF) at the 1994 Foundations of Software Technology and Theoretical Computer Science (FST&TCS) conference in Madras (now Chennai), India. The links below go to the papers directly, except for Szelepcsényi’s, which I can't find online.
1975-1984 (From 2006)

Will I do this again in ten years when I'm 70? Come back in 2034 and find out.

Updated Movie Ratings

