Brute-forcing my algorithmic ignorance (blog.dominikrudnik.pl)
110 points by qikcik 13 days ago | 57 comments



kixiQu 12 days ago

I'm always interested in write-ups when folks try new attacks on self-study.

I will also admit that this part hurt my heart to read (vicarious embarrassment):

> the recruiter mentioned I needed to pay more attention to code debuggability (whatever this means - I assume that under the corpo-language, they mean that I wrote invalid code)

TrackerFF 12 days ago

Note: I haven't done any tech interview in 6 years.

I'm kind of surprised they still do leetcode-style questions in remote interviews these days. I thought those types of interviews would be 100% gamed by now.


The honest ones who admit they used it as a learning tool rather than a shortcut are getting more use out of it than anyone else.

> Find Minimum in Rotated Sorted Array

I've seen that problem in an interview before, and I thought the solution I hit upon was pretty fun (if dumb).

  # (imports added for completeness - LeetCode provides these implicitly)
  from bisect import bisect_left
  from typing import List

  class Solution:
      def findMin(self, nums: List[int]) -> int:
          class RotatedList():
              def __init__(self, rotation):
                  self.rotation = rotation
              def __getitem__(self, index):
                  # nums is captured from the enclosing scope
                  return nums[(index + self.rotation) % len(nums)]
  
          class RotatedListIsSorted():
              def __getitem__(self, index) -> bool:
                  rotated = RotatedList(index)
                  print(index, [rotated[i] for i in range(len(nums))])
                  return rotated[0] < rotated[len(nums) // 2]
              def __len__(self):
                  return len(nums)
  
          rotation = bisect_left(RotatedListIsSorted(), True)
          print('rotation =>', rotation)
          return RotatedList(rotation)[0]

I think it is really interesting that you can define "list-like" things in Python using just two methods. This is kind of neat because sometimes you can recast an entire problem as a binary search over a virtual list of candidate answers to that problem; here you are looking for the leftmost point where the predicate becomes True. Anyway, I often bomb interviews by trying something goofy like this, but I don't know, when it works, it is glorious!
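To make the trick concrete in isolation (names here are my own, not from the solution above): any object exposing `__getitem__` and `__len__` can be handed to `bisect_left`, so you can binary-search a boolean predicate that is only evaluated lazily, O(log n) times. A minimal sketch using integer square root as the toy problem:

```python
from bisect import bisect_left

class PredicateView:
    """A lazily evaluated 'list' of booleans: index i stands for pred(i)."""
    def __init__(self, pred, size):
        self.pred, self.size = pred, size
    def __getitem__(self, i):
        return self.pred(i)          # computed on demand, never materialized
    def __len__(self):
        return self.size

# Toy example: isqrt(n) is the leftmost i where (i + 1)**2 > n.
n = 30
isqrt = bisect_left(PredicateView(lambda i: (i + 1) ** 2 > n, n + 1), True)
print(isqrt)  # 5, because 5**2 <= 30 < 6**2
```

This only works when the predicate is monotone (False...False True...True), which is exactly the property binary search needs.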

Good luck on your second round!

and12-qwd 12 days ago

It is kind of odd to admit this before the second round of interviews. Perhaps glorifying LLMs is now a plus, but it is still a gamble.

It is also odd that this article appears here right after someone complained about vibe coding killing interest in algorithms.

This game is played often: people have valid complaints, then someone posts a "rebuttal" ("LLMs are not bad for $X; they are good for $X").

Anyway, he uses LLMs mostly for their search capability, which is less controversial than generative AI and vibe coding.

piokoch 13 days ago

This is very interesting. I have been using LLMs to learn new things in the same way, and it really works. To some extent, learning with an LLM is better than taking any course, even with a tutor, because the material is prepared specifically for me, in terms of my experience, progress level, and so on.

LLMs are going to change schools and universities a lot. Teachers and tutors will have to find their place in this new reality, because they now have a strong competitor with effectively infinite resources and vast knowledge, one that is patient and ready to work with every student individually, according to that student's needs, level, intelligence, and so on.

Instruction-based tutoring is dead from that perspective: why should I follow someone reciting a book or an online tutorial when there is a tool that can introduce me to the subject in a better and more interesting way?

Sure, there are great teachers, inspiring people who are able to present a topic brilliantly; the point is, they are a minority. Now everyone can have a great tutor for a few dollars a month (or for free, if you do not need to generate too much output quickly).


> LLMs are going to change schools and universities a lot. Teachers and tutors will have to find their place in this new reality, because they now have a strong competitor with effectively infinite resources and vast knowledge, one that is patient and ready to work with every student individually, according to that student's needs, level, intelligence, and so on.

No it won't. It really, really won't. You clearly don't have any university professors amongst your friends or acquaintances.

What you wrote is what the STUDENTS think. The students think they have found a cheat code.

No university professor considers an LLM "a competitor". They see the slop output on their desks every day.

The reality is that just as LLMs confidently push out slop code, they push out slop for everything else too. Because LLMs are nothing more than a party trick: a stats-based algorithm that gives you answers within a Gaussian curve.

The students come to the professors with stupid questions because they have been trusting the AI instead of learning properly. Some even have the audacity to challenge the professor's marking with "but the AI said it is right" about some basic math formula the student should be able to solve with their own brain.

So what do my university professor friends end up doing?

They spend their evenings and weekends devising lab tasks that students cannot complete by simply asking an LLM for the answer. The whole point of university is that you go there to learn to reason and think with your own damn brain, not to paste the question into a text box and paste the answer back to your professor.

Trying to cheat your way through university with an LLM is a waste of the student's time, a waste of the professor's time, and a waste of the university's infrastructure.

That, my friend, is the reality.

vincedorf 12 days ago

We've seen this at my university - professors who adapted started using LLMs as a foil, assigning students to find where the model is wrong. That's actually a stronger critical thinking exercise than traditional homework. The ones resisting it entirely are mostly just protecting the existing evaluation machinery, not the learning itself.
gmn44 12 days ago

The piece I'm skeptical of: LLMs are endlessly patient and adaptive, but do they push back enough when you're wrong? A good tutor argues with you. Has anyone found reliable prompts that actually challenge your reasoning rather than just confirming it? It sounds interesting either way; can you share some useful prompts for learning?
e12e 12 days ago

Interesting article - but perhaps a bit light on details in some places, like:

> I generated a list of the most common interview tasks

How? I suppose they mean gathered, or searched for, not strictly generated?

Also a little light on details of the actual interview.

I'm also a little confused about the listing of "problems" - do they refer to some specific leet-code site's listing of problems?

It seems halfway between naming an actual algorithm/problem and naming a concrete exercise.

As for:

> How is it that we do not use this "forgotten and forbidden" coding in our daily production code, even though all highly reusable, useful code is essentially an exploitation of the intersection between classical algorithmic thinking and real-world problems?

I'm not sure what to say - most of this stuff lives in library code and data structure implementations for any language in common use?

Indeed, the one saving grace of leetcode interviews is arguably that they show whether a candidate can choose sane data structures (and algorithms) when implementing real-world code.

qikcik 12 days ago

You are right, I missed some crucial details in the blog entry. I will definitely take your feedback into account for Part 2, where I want to do a more detailed deep dive into my prompting protocols (maybe with some exact examples) and my learning strategy.

To answer your questions:

1. By "generated" I mean that I prompted the LLM incrementally to provide me the list of the next LeetCode problems to do (without the deep research/search function)

2. Yes, the problem names are the exact names from LeetCode. Initially, the LLM suggested this format, and I later forced it to stick to real LeetCode problems.

This allowed me to verify some output independently of the LLM (avoiding hallucinations), cross-check solutions with other materials, and track my progress.

Interestingly, I realized later that the LLM was mostly pulling from the standard Blind 75 problem set, and almost all the problems are from that list.

3. About the "forgotten and forbidden" code: I probably phrased it poorly in the article. As you said, this algorithmic logic is abstracted away in standard libraries and data structures. The disconnect for me (and, I suspect, for many "business logic" developers) is that our daily production code rarely requires writing these fundamental structures from scratch, so we never see the patterns that could also apply to higher-level business logic. But this is still a work-in-progress hypothesis in my mind, without detailed examples.

gurachek 12 days ago

Your "no compiler" rule on day 3 taught you more than the LLM did. The LLM made concepts click. But the binary search vanishing under interview stress proves that understanding something and being able to produce it under pressure are totally different skills. Nobody talks about this enough in the "just use ChatGPT to learn" discourse.
krackers 12 days ago

There is this famous quote from Bentley on asking programmers to write binary search

>I’ve assigned this problem [binary search] in courses at Bell Labs and IBM. Professional programmers had a couple of hours to convert the above description into a program in the language of their choice; a high-level pseudocode was fine. At the end of the specified time, almost all the programmers reported that they had correct code for the task. We would then take thirty minutes to examine their code, which the programmers did with test cases. In several classes and with over a hundred programmers, the results varied little: ninety percent of the programmers found bugs in their programs (and I wasn’t always convinced of the correctness of the code in which no bugs were found).

>I was amazed: given ample time, only about ten percent of professional programmers were able to get this small program right. But they aren’t the only ones to find this task difficult: in the history in Section 6.2.1 of his Sorting and Searching, Knuth points out that while the first binary search was published in 1946, the first published binary search without bugs did not appear until 1962.

The invariants are "tricky": not necessarily hard, but also not so trivial that you can convert your intuitive understanding back into code "with your eyes closed". Especially since most incorrect implementations are only "subtly flawed" rather than outright broken. Randomizing an array is another algorithm in this class: conceptually easy, but most hand-rolled implementations are "almost right", not actually generating all permutations with equal probability.
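The shuffle case is a nice illustration (a sketch of the classic contrast; in practice `random.shuffle` already does this correctly): Fisher-Yates draws the swap partner from a shrinking range, while the common "almost right" version draws from the full range every time, producing n**n equally likely execution paths that cannot map evenly onto n! permutations.

```python
import random

def fisher_yates(a):
    """Uniform in-place shuffle: position i swaps with a uniform j <= i."""
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)         # key detail: the range shrinks with i
        a[i], a[j] = a[j], a[i]
    return a

def biased_shuffle(a):
    """Looks plausible, but swaps every i with j drawn from the FULL range.
    n**n outcomes cannot divide evenly into n! permutations, so it's biased."""
    for i in range(len(a)):
        j = random.randint(0, len(a) - 1)
        a[i], a[j] = a[j], a[i]
    return a
```

Both return a permutation of the input, which is why the bug survives casual testing; only the distribution over permutations differs.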

qikcik 12 days ago

You are 100% right. For me, the most important thing is that the LLM teacher allowed me to break through my algorithmic ignorance in just one week.

The rest is somewhat orthogonal to the LLM and is just pure practice. It is very easy to procrastinate with an LLM and never actually practice.

It allowed me to actually see the problem space and something like the "beauty of classical algorithms". It shifted my "unknown unknowns" into "known unknowns". I had failed so many times to achieve exactly that without an LLM in the past.

nico 12 days ago

Recently I had a coding interview in which I was allowed to search online but not use any AI. On the first Google search, the interviewer realized that the first result is now AI-generated and said I couldn't use anything from there. So I had to click through different links and piece together what I needed from inside the pages.
nina 12 days ago

We hired based on exactly this kind of scavenger hunt skill for years. Turns out googling well and reading docs fast is actually useful on the job, so I never felt bad about it.

"You have to build a house, but don't use concrete mixers; you must mix by hand to really see if you know the physics of concrete."
tom-blk 12 days ago

Very cool, I have personally been studying zk-cryptography with a similar approach, works really well with some caveats. Will save this article and try this version as well when the time comes!

Another POV: it used to be cool to work for Google.

It has been uncool and harrowing for a while now to deal with their leetcode BS. I mean, obviously this guy is well-meaning, but he did not really learn anything here except for the sake of the paycheck and whatever desperate circumstances require it.

LLMs being used to beat their interview process is an inflection point where wanting to work for Google for any reason other than money drops off steeply.

Maybe this is why Google DeepMind researchers keep leaving to start their own successful companies.

aad65 12 days ago

After grinding maybe 200 problems over six months, the thing that surprised me most was how different "understanding the solution" is from writing it cold under pressure. Sliding window is the clearest example: you can follow the logic fine but fumble the pointer initialization every time, until something clicks around problem 15 or so.
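The pointer pattern that eventually stuck for me (a generic sketch, using "longest substring without repeating characters" as the example): advance the right pointer unconditionally, and only move the left pointer forward to restore the window invariant; neither pointer ever moves backward.

```python
def longest_unique_window(s):
    """Length of the longest substring of s with no repeated characters."""
    seen = {}            # char -> index of its last occurrence
    left = best = 0
    for right, ch in enumerate(s):
        # restore the invariant: if ch occurs inside the current window
        # [left, right], jump left just past its previous occurrence
        if ch in seen and seen[ch] >= left:
            left = seen[ch] + 1
        seen[ch] = right
        best = max(best, right - left + 1)
    return best
```

Because each pointer moves at most n steps total, the whole scan is O(n) even though the window repeatedly grows and shrinks.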