~dfa

Hey, welcome to my slice of the internet here in the club of tildes.

  • Some books I like
  • Some food I cook and eat
  • Arranging keys on my key holder is NP-hard.

    2015-10-20

    This past summer, I bought a key-holding gadget. There are two screws, one on either end, that pierce through the keys. The keys fold in on each other. I'm a little paranoid about losing my keys, so I figured it'd be a good way to make sure I don't lose any of them.

    But god forbid you ever have to change what keys you carry with you.

    I just bought a USB drive to put on it. (I recently had a snafu with Arch and had to chroot in, but I had to go a couple hours without my laptop because I didn't have a boot USB on me. Never again.)

    Anyway.

    30 minutes into the process of rearranging my keys so that the USB drive would fit on it, I thought to myself: "this has to be NP-hard."

    And indeed it is.

    We reduce from PARTITION. (Really, they're nearly the same problem). Take a second and gaze at the key holder on the linked amazon page to get a sense for the problem. We want to arrange the keys so that all keys fit on the keyring and there's no "tangling" of keys in the middle.

    Call the length of the key holder l.

    Suppose I give you a bunch of keys to put on the key holder. Each key is very short - maybe $\frac{l}{3}$. So the length of the keys won't be an issue - it's mostly an issue of getting the heights to match. The keys are of varying heights.

    Certainly, the key holder needs to have equal height on each end. Additionally, every key must be present (like, why would you even buy one of these things if you're still going to have keys jangling around?).

    So then, if the sum of the heights of the keys is S, each side of the key holder must have height $\frac{S}{2}$.

    But this is exactly PARTITION, which we know to be NP-complete.

    Sigh. Wikipedia says that there's a pseudo-polynomial time dynamic programming solution for PARTITION, which has led PARTITION to be called "The Easiest Hard Problem".
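    As an aside, that pseudo-polynomial dynamic program is short enough to sketch. This is my own minimal version (`can_partition` and `heights` are illustrative names), tracking which subset sums are reachable:

    ```python
    def can_partition(heights):
        # Pseudo-polynomial DP for PARTITION: can the heights be split
        # into two halves of equal sum?
        total = sum(heights)
        if total % 2 != 0:
            return False  # odd total: no equal split possible
        target = total // 2
        reachable = {0}  # subset sums reachable so far
        for h in heights:
            reachable |= {s + h for s in reachable if s + h <= target}
        return target in reachable
    ```

    It runs in time proportional to (number of keys) × (total height) - polynomial in the magnitude of the heights rather than in their bit-length, which is what "pseudo-polynomial" means.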

    Though to be frank, I don't know if I consider that to be much of a consolation.

    It's halting problems all the way down.

    2015-04-09

    Deciding things is hard. There are countably many Turing machines, but an uncountable number of decision problems! Maybe asking our TM M for a decision is sometimes unreasonable - maybe we should let him phone a friend. A reasonable question to ask is: if our friend always gives us the right answer, is this enough?

    We define the phone-a-friend mechanism as follows: our TM M is given access to an oracle. The TM may ask the oracle a membership question: is this string \(w\) in some set \(S\)? The oracle is all-knowing and will return a yes or no answer immediately. The oracle will always answer correctly. We call a Turing machine with access to an oracle an oracle Turing machine.
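    As a toy illustration (entirely my own sketch - `decide_halts` and `fake_table` are made-up names, and of course no real finite table for HALTS can exist), one can model an oracle in Python as a membership-answering callable handed to the machine:

    ```python
    def decide_halts(instance, halts_oracle):
        # An oracle TM deciding HALTS: forward the membership question
        # to the oracle, which answers correctly and immediately.
        return halts_oracle(instance)

    # A fake, finite stand-in for the all-knowing oracle, for illustration only.
    fake_table = {("M1", "x"): True, ("M2", "y"): False}

    decide_halts(("M1", "x"), fake_table.get)  # → True (per our fake table)
    ```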

    So certainly, all of a sudden, life gets a lot easier! For example, solving HALTS is trivial! Just ask the oracle. But here's an interesting question: is there a decision problem that a Turing machine can't solve, even when given an oracle to HALTS? Which is to say, is there a problem that our TM M's friend can't know?

    Unfortunately (and maybe unsurprisingly), yes. Consider the following problem, which we call \(SUPERHALTS\):

    \[ SUPERHALTS = \{(M, x) \mid M \text{, given an oracle for } HALTS \text{, halts on input } x\} \]

    We can use the classic diagonalization argument to show that this is undecidable. Suppose we have some oracle Turing machine \(H\) that decides SUPERHALTS. Then we can define a new TM D to be:

    D(X): if H(X, X) accepts, then LOOP; else ACCEPT

    But then D(D) halts if and only if H(D, D) accepts. But H(D, D) accepts iff D(D) loops! So we have that D(D) halts if and only if it loops, a contradiction. Even if you've seen this argument before, take a minute and reason through that last sentence. It's good for you.

    So this is interesting. We've found a problem that's harder than the halting problem. Significantly so. Which brings us to something called Turing degrees. Computable problems have Turing degree 0. Anything Turing-equivalent to the halting problem has degree 0′. SUPERHALTS is our first problem with Turing degree greater than 0′.

    It's interesting how coarse a measurement Turing degree really is. Obviously, it doesn't touch notions of complexity - it has no regard for the distinction between, say, \(P\) and \(ELEMENTARY\). But further, it doesn't even distinguish between Turing-decidable and Turing-recognizable! (Or if you prefer, recursively enumerable.) (A Turing-recognizable set is similar to a decidable one; we just relax the restriction that the TM must halt on all inputs.)

    So here's another question: is there any problem of intermediate degree? Some problem that falls strictly between the computable problems and HALTS? This is known as Post's problem (different from the Post Correspondence Problem). And the answer, apparently, is yes.

    The result involves something called a priority argument. In a priority argument, we build some set \(X\) to satisfy a (potentially infinite) list of requirements. Each requirement specifies whether or not certain elements are in \(X\). So we start with, say, the universe. Then requirement 1 specifies that elements in \(X\) must have some feature. And requirement 2 does similarly. Maybe requirement \(k\) designates that some element gets thrown back into \(X\). And so on.

    Anyway, this technique can be used to generate two problems A and B, both of which can be solved with an oracle for the halting problem, but neither of which can be solved with an oracle for the other! I guess you use the priority argument to forbid any Turing machine that would reduce A to B or vice versa.

    And into the world of non-computability we go! And you thought complexity was bad...

    A Turing Machine Quine

    2015-03-13

    Today, we'll talk about something quite exciting. We define a Turing machine that prints its own source code. This construction offers us insight into how one may construct quines in any programming language.

    First, some quick definitions. A quine is a program that prints itself. At first this may seem impossible! A first attempt in python may look something like

    print "print"

    But wait. We missed the first print. So perhaps we'll add another print? But then we have

    print "print 'print'"

    and we have the problem we started with. Let's revisit this in a moment.

    A Turing machine is an abstraction of a computer. It has some finite number of states, transitions between those states, and infinite memory. Excitingly, this turns out to be a quite reasonable definition of computation. There's a very important result in computer science called the Church-Turing Thesis, which basically says that anything your-programming-language-here can do, so can a Turing machine.

    Consequently, offering a Turing machine quine is a way of offering a quine for every programming language! We'll find that it's actually quite instructive to talk about quines in the abstract first, before moving into specific programming languages.

    Right. So let's get started. We present the following lemma:

    There is a computable function \(q\), where if \(w\) is some string, \(q(w)\) is a description of a Turing machine that prints out \(w\) and halts.

    We offer the following TM as a construction of this function:

    Q = "On input string w:
           1. Construct the following TM P_w:
              P_w = 'On any input:
                       1. Erase the input
                       2. Write w to the tape
                       3. Halt'
           2. Return P_w"

    The distinction between \(q\) the function and Q the Turing machine can be a bit subtle. \(q\) is a function that maps strings to (descriptions of) Turing machines; Q is the Turing machine that computes \(q\). The result of applying \(q\) to \(w\) is the printing machine P_w. That is, P_w = \(q(w)\).

    So our TM Q takes a string w and outputs a TM that prints w. Perfect! Exactly what we wanted. Let's come back to this - we'll see why this is useful in a moment.
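    For a concrete feel, here's a sketch of \(q\) in Python 3 - my own illustration, not part of the proof. Given a string w, it emits the source of a program that prints w and halts:

    ```python
    def q(w):
        # Return the source code of a program that prints w and halts.
        # %r embeds a quoted, escaped literal of w into the emitted source.
        return "print(%r)" % (w,)

    # Running the emitted program prints the original string back.
    exec(q("hello"))  # prints: hello
    ```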

    With this lemma in hand, we proceed to the main task: building a TM that prints itself. We'll split the machine up into two parts - A and B. First A will run, then B. Let's start with a description for A.

    A's description depends on B, so let's assume we've written B. Remember the function \(q\) we just defined? We define A to be \(q(B)\). Which is to say, A is just a TM that, on any input, just prints a description of part B. This depends on our definition of B, so let's talk about that now.

    B's the second and last part of the program, so at the end, we should have printed a full description of AB. By the time we get to B, A just ran, leaving a copy of B's source code sitting on the tape. Which means at this point, B has a description of itself. So then how do we get a description of A?

    Here's the trick: we apply \(q\) to our description of B. By our definition, \(q(B)\) is a TM that, on any input, prints a copy of B. This was exactly our definition of part A! So B takes its own source code and applies \(q\) to it, obtaining a description of A. Then B outputs AB, completing the proof.

    To summarize:

    QUINE = "On input string w:
               1. A = q(B)  # A Turing machine that always prints B
               2. B = 'On input M, where M is a part of a TM:
                         1. return q(M) + M'"

    Using this proof as a template, let's consider how we would write a quine in python. As before, let's consider part A first. Part A needs to give B a copy of B's source code. In the TM model, this was achieved by leaving a copy of B's description on the tape.

    In python, we can just assign into a variable to achieve the same effect. So our part A should look something like

    b = "b's source code here"

    Part B should print part A and then print part B. Something like:

    print "b = %s" % b  # Print part A
    print b             # Then print part B

    Combining these two together (along with some careful tiptoe-ing around python formatting) yields:

    b = 'print "b = %r" % b; print b'
    print "b = %r" % b; print b

    And there you have it! A general guideline to make quines followed by an example. You are now equipped to go out and impress all your friends with your quine-making abilities. :P
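    For the curious, the same structure ports to Python 3, where print is a function - a sketch, but one you can check by running it. Part A is the assignment handing B its own source; part B prints A, then prints B:

    ```python
    b = 'print("b = %r" % b); print(b)'
    print("b = %r" % b); print(b)
    ```

    Saved on their own as a two-line file, these lines print exactly that two-line file.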

    Traversing the Infinite Complete ω-nary Tree

    2016-03-03

    The infinite complete ω-nary tree is one where every node has ℕ-many children. There are no leaves; the tree just extends downward infinitely. Call this graph 𝔊.

    We can't BFS or DFS over 𝔊. A DFS would simply get stuck on the leftmost branch forever, and a BFS would never reach depth 2. How then are we to traverse it?

    In the infinite complete binary tree, nodes are uniquely identified by a finite-length binary string. In 𝔊, nodes are uniquely identified by a finite sequence of natural numbers. Let s(v) be v's corresponding sequence. In 𝔊, u is the parent of v iff s(u)'s length is one less than s(v)'s and s(u) is a prefix of s(v).

    Any tree traversal produces a well order on the tree's vertices. BFS on the complete infinite binary tree gives the shortlex ordering (sort first by length, then lexicographically). In fact, on level i, the set of corresponding binary strings is the set of all i-bit natural numbers, and the nodes are visited in increasing order.

    Further, any tree traversal has order type ω.

    A traversal of 𝔊 is a well order on the nodes of 𝔊. What does this order look like? Here's the idea (nodes are represented as int tuples):

    {% highlight python %}
    def traverse():
        visitedNextChild = {(): 0}
        while True:
            currentlyVisited = list(visitedNextChild.keys())
            for v in currentlyVisited:
                nextChild = v + (visitedNextChild[v],)
                visit(nextChild)
                visitedNextChild[v] += 1
                visitedNextChild[nextChild] = 0
    {% endhighlight %}

    Here, we start with the root node, which we represent as the empty tuple. We maintain a mapping from visited nodes to the next child of theirs to visit. At each iteration, we visit each of the prescribed next children and update the mapping. (The list() snapshot matters: the loop adds keys to the mapping as it goes.)

    The fact that this visits every node in 𝔊 follows easily by induction.
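    For experimenting without an infinite loop, the same idea can be phrased as a generator - a sketch of my own (`traverse_gen` is a made-up name; it relies on Python 3.7+ dicts preserving insertion order):

    ```python
    from itertools import islice

    def traverse_gen():
        # Yield the nodes of the tree in traversal order.
        visitedNextChild = {(): 0}
        while True:
            for v in list(visitedNextChild):  # snapshot: the loop mutates the dict
                nextChild = v + (visitedNextChild[v],)
                visitedNextChild[v] += 1
                visitedNextChild[nextChild] = 0
                yield nextChild

    # First few visited nodes (after the root, the empty tuple):
    list(islice(traverse_gen(), 7))
    # → [(0,), (1,), (0, 0), (2,), (0, 1), (1, 0), (0, 0, 0)]
    ```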

    In math symbols, if \(S_i\) is the set of visited nodes at iteration \(i\), then


    $$ \begin{align*} S_{i+1} = S_i &\cup \{s + 0 \mid s \in S_i \} \\ &\cup \{s_1s_2\ldots (s_n+1)\mid s_1s_2\ldots s_n \in S_i \} \end{align*} $$

    (there are totally duplicates being added here, but that's the beauty of sets).

    Fix the nodes \(u = s_1 \ldots s_{n-1}\) and \(v = s_1 \ldots s_{n-1} s_n\). Define t(x) to be the iteration at which x is visited. Then \(t(v) = t(u) + s_n + 1\). This leads to the following gorgeous fact:

    \(s_1 \ldots s_n\) is visited at iteration $\sum_{i=1}^n (s_i + 1) = n + \sum_{i=1}^n s_i$.

    This means that our tree traversal has a pretty interesting sub-relation: namely, u precedes v if u's length plus u's digit sum is less than v's length plus v's digit sum. Or (if we one-index), just compare the digit sums.

    From here on out, we'll one-index for simplicity's sake. (That is, assume ℕ starts at 1.)

    Let's see if we can characterize the entire ordering. (That is, instead of building a relation based on iteration, build a relation built on precise ordering of traversal).

    It's exactly the same relation, except that ties are broken by recursing on the largest proper prefix of each.

    {% highlight python %}
    def lessThan(u, v):  # u < v
        if u == v:
            return False
        if digitSum(u) != digitSum(v):
            return digitSum(u) < digitSum(v)
        # Tied: recurse on the largest proper prefix of each.
        return lessThan(u[:-1], v[:-1])
    {% endhighlight %}

    So the empty sequence is the least element (as we visit the root of 𝔊 first). I'm fairly certain that if you create the corresponding ≤ relation, this becomes a total order.

    Here's the cool thing: we've produced an order on ℕ* that has order type ω! (The normal shortlex trick doesn't work when our alphabet is countably infinite.)

    In general, if we want to produce an ordering of order type ω on ℕ*, it suffices to partition ℕ* into countably many classes, each of finite size. Then the "concatenation" of these classes yields order type ω.
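    Here is one illustrative way to code up such a partition (`enumerate_class` is my own name for it); the classes are "length plus digit sum equals k", matching the traversal above:

    ```python
    from itertools import product

    def enumerate_class(k):
        # All tuples of positive integers s with len(s) + sum(s) == k.
        # Each class is finite, and every tuple lands in exactly one class.
        out = []
        for n in range(k + 1):
            for s in product(range(1, k + 1), repeat=n):
                if n + sum(s) == k:
                    out.append(s)
        return out
    ```

    Concatenating enumerate_class(0), enumerate_class(1), … lists all of ℕ* in an order of type ω.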

    Just some fun observations :)

    Long-form reading Chinese

    2018-01-04, updated 2020-05-17

    Study what you want to know.

    Do you want to pass standardized exams? If so, study those vocab lists. Do you want to read a book? If so, collect a list of the most frequently used words in the book and study those.

    Collecting vocab from books

    Open a txt file of the book you want to read in ChineseTextAnalyzer. For popular books, googling the Chinese title and 'mobi' or 'txt' usually turns up a copy of the book.

    ChineseTextAnalyzer splits the file up into words (presumably with longest-match against CC-CEDICT). Based on a set of 'known words,' ChineseTextAnalyzer can give you a list of unknown words, sorted by frequency of appearance in the text. It's then super easy to export these words into a CSV file and import into Anki.

    I find this method of gathering vocab to be much more motivating than studying lists of words from a textbook. This way, I get longer-form reading with content I'm interested in, and a vocab list custom-tailored to the content I want to consume.

    (Shout out to the incredible 书博 for telling me about this one).

    Define-on-highlight in Kindle (with pinyin)

    I consider define-on-highlight an essential feature for reading 'real' Chinese long-form material. I don't have the patience to handwrite characters into Pleco every time I don't understand what's going on. Finding a good mobi file of the Chinese-English dictionary CC-CEDICT does wonders.

    I put this file in the documents/dictionaries directory on my Kindle; it might be different for your e-reader. If your e-reader doesn't support mobi, I suggest the wonderful tool ebook-convert as a means to convert it to epub.
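    For example, the conversion is a one-liner (ebook-convert ships with calibre; the filenames here are hypothetical):

    ```shell
    # Convert the CC-CEDICT mobi to epub with calibre's ebook-convert
    ebook-convert cedict.mobi cedict.epub
    ```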

    This isn't a perfect solution; CC-CEDICT is probably my least favorite Chinese-English dictionary around, but I have been unable to find any other dictionaries in electronic form. The Kindle has a Chinese-English dictionary by default, but for baffling reasons, it does not include pinyin.