Not really blogging

A. Makelov


My (current) LaTeX configuration: a relatively easy way to learn how to take LaTeX notes in real time

This is the product of several years of meticulously writing LaTeX documents, coupled with 3 or 4 sudden realizations of the form “I’ve been doing this wrong the whole time, and here’s how to do it faster”. The end result is that I can take notes in LaTeX as math people write stuff on the board and say all kinds of complicated nonsense, and the notes are just as understandable and readable as the real thing (hey, I’m not a magician yet, so I can’t actually make it more understandable). And all that without having to memorize unnatural keyboard shortcuts.

Starting with the OS, I’m running Linux, more specifically an Ubuntu Virtual Machine. I have 4GB of RAM and a reasonably fast CPU, so I can pull it off.

The editor I’m using is gummi. It also has a Windows version, which they say is super unstable; I’ve used it a couple of times for minor things and it behaved OK. Maybe it’s altogether OK, but I don’t know. Edit: actually, a friend of mine has been using the Windows version for several years with essentially zero problems, so maybe you should have more confidence in it!

There are (at least) two very important things about gummi: live preview, and snippets.

The live preview is an extremely light .pdf viewer embedded in your editor window, to the right of the actual .tex code. It shows how your document would look if compiled and exported to .pdf. It refreshes automatically (so you don’t have to click anything) every n seconds, where n is a natural number between 1 and 60 but otherwise entirely up to you. It’s super useful in terms of knowing what you’re doing and how things look. It gets a little unwieldy when your document is over 50 pages or so, but that’s not a big issue for me right now. Also, you can always split different parts of your document into different files to avoid that, and play around with commenting/uncommenting \include statements.

Edit: One important thing I forgot to mention is that the preview scrolls up and down automatically as you edit different parts of the file, at least for documents that aren’t too long. Also, by doing Ctrl + click on some part of the preview, the cursor in your code goes to the place producing the thing you clicked on! That’s really neat.

The snippets allow you to replace a (usually short) sequence of non-space characters with some predefined text, in such a way that your cursor will be positioned wherever you want in that text, and each time you press Tab, it jumps to the next place you want it to be at. So for example, I have a snippet “frac”: whenever I type “frac” and press Tab right after the “c”, it produces \frac{}{}, puts my cursor in the numerator placeholder, and pressing Tab again moves it to the denominator placeholder and finally outside the environment. You can do this with more complicated commands too, and add additional functionality. Basically, just make snippets for all your theorem-ish environments, your equation environments, your bracketing (matching left/right parentheses, square brackets, and those absolute value bars, whatever they’re called) and all your special operators like summation, probability, limsup, etc., and you’ll be in good shape. Gummi comes with some pre-loaded snippets, too. The result is that you don’t have to use your mouse/touchpad to transition into and out of complicated environments, and you can generate common environments fairly easily.
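For illustration, here’s roughly what such snippet definitions look like in snipMate/TextMate-style syntax (gummi has its own snippets editor; treat the exact syntax below as an illustrative sketch, not gummi’s precise file format):

```text
snippet frac
	\frac{${1}}{${2}}

snippet thm
	\begin{theorem}
		${1}
	\end{theorem}
```

The `${1}`, `${2}` markers are the Tab stops: the cursor lands on `${1}` when the snippet expands, and each Tab press jumps to the next stop and finally past the end.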

So I guess the moral is this: even if you don’t use gummi, find an editor which has the above two functionalities. They, when used reasonably and combined with a reasonable preamble full of reasonable macros (for example \def\N{\mathbb{N}} is super reasonable) can actually give you amazing results in terms of speed! Happy nerding out!
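To make the “reasonable macros” point concrete, here’s the flavor of preamble I have in mind (the macro names are just my own conventions, nothing standard; also, \newcommand is a bit safer than \def, since it refuses to silently clobber an existing command):

```latex
\usepackage{amsmath,amssymb,amsthm}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}[theorem]{Lemma}
\newcommand{\N}{\mathbb{N}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\E}{\mathbb{E}}                  % expectation
\newcommand{\abs}[1]{\left\lvert #1 \right\rvert}
```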


What does the Banach-Tarski theorem have to do with the axiom of choice?

`What’s an anagram of Banach-Tarski?’

`Banach-Tarski Banach-Tarski.’

Banach and Tarski

The Banach-Tarski theorem says the following:

Theorem 1 [Banach-Tarski theorem] Given a solid ball in 3-dimensional Euclidean space {\mathbb{R}^3}, we can partition it into a finite number of pieces, so that we can rearrange them to get two solid balls congruent to the first ball.

This clearly goes against people’s intuitions about volume. It is often noted that the Banach-Tarski theorem is a consequence of the axiom of choice; some use of choice is essential here, since the theorem cannot be proved in ZF alone (though, strictly speaking, it is weaker than the full axiom of choice). In one of its most strikingly obvious formulations, the latter says:

[Axiom of choice] The product of a collection of nonempty sets is nonempty.

That is, for any collection of sets {\{S_i\}_{i\in I}}, where the index set {I} can be an arbitrary set, one can find an indexed family {S = (s_i)_{i\in I}} such that {s_i\in S_i} for all {i\in I}.

So, we have two statements, one of which follows from the other, and one is completely, obviously true, whereas the other is completely, obviously false. OK.

What do these two things have to do with each other? Here’s the beginning of an answer. Our intuitions about concepts like volume and area are formalized in mathematics through what is called a measure. Here’s a definition that summarizes the intuitively desirable properties of such a measure:

Definition 2 A measure on {\mathbb{R}^n} is a non-negative, translation-invariant, countably additive function {\mu:\mathcal{P}(\mathbb{R}^n)\to[0,+\infty]} that assigns to each parallelepiped its volume. That is to say,

  1. {\forall S\subset\mathbb{R}^n, \mu(S)\geq0}.
  2. {\forall S\subset\mathbb{R}^n, v\in\mathbb{R}^n, \mu(S) = \mu(S+v)}
  3. {\forall S_1,S_2,\ldots \subset \mathbb{R}^n} disjoint,

    \displaystyle \begin{array}{rcl} \mu\left(\displaystyle\bigcup_{i=1}^\infty S_i\right)=\displaystyle\sum_{i=1}^\infty \mu(S_i) \end{array}

  4. {\forall a_i,b_i\in\mathbb{R}, \mu( \prod_{i=1}^{n} [a_i,b_i])= \prod_{i=1}^{n} (b_i-a_i)}.

We would then hope to be able to build up complicated sets from many parallelepipeds. Or something like that in any case.

Why do we allow only countably many sets in (3)? Well, the (only!) alternative is to allow arbitrary (in particular uncountable) families, which would then force the measure of the entire {\mathbb{R}^n} to equal the sum of the measures of its points; but points should have zero volume! So our condition (3) is actually pretty liberal.

However, it turns out that in the above definition we wanted too much, i.e. it is inconsistent:

Theorem 3 There exists a subset {A\subset\mathbb{R}} for which {\mu(A)} doesn’t exist.

Proof: Consider the equivalence relation on {[0,1]} given by

\displaystyle \alpha\sim\beta \iff \alpha-\beta\in\mathbb{Q}

By the axiom of choice, we can pick an indexed collection of representatives for each equivalence class; call the set underlying this collection of representatives {A}. To really convince yourself that this is indeed a set (because that’s tricky business), you should play around with the axioms of Zermelo-Fraenkel set theory and the definition of an indexed collection. We shall show that {\mu(A)} doesn’t exist.

Assume the opposite, and consider, for each {r\in\mathbb{Q}\cap[-1,1]}, the sets

\displaystyle A_r = \{ a + r \ \big| \ a\in A\}

Observe that the {A_r} are disjoint: otherwise we would have {a_1+r_1=a_2+r_2} for some {a_1,a_2\in A} and distinct {r_1,r_2\in\mathbb{Q}\cap[-1,1]}. Now {a_1=a_2} would force {r_1=r_2}, so {a_1\neq a_2}, and hence {a_1-a_2=r_2-r_1\in\mathbb{Q}}, contradicting the fact that {a_1,a_2} are in different equivalence classes.

On the other hand, {[0,1]\subset\displaystyle\bigcup_{r} A_r\subset[-1,2]}. Since the {A_r} are countably many disjoint translates of {A}, each with {\mu(A_r)=\mu(A)} by translation invariance, the properties of measure give

\displaystyle 1 \leq \displaystyle\sum_{r} \mu(A)\leq 3

The first inequality gives us {\mu(A)>0}, whereas the second gives us {\mu(A)=0}, thus the contradiction with the assumption that {\mu(A)} exists. \Box

This is where our intuition about volume breaks: it’s impossible to formalize it so that it works for all sets. Now, it’s kind of clear that at least one of the pieces in the decomposition of the ball in the Banach-Tarski paradox has to be non-measurable, much like the set {A} above, and that is where `volume conservation’ fails.

What people do to define the measure in a consistent way is to be very careful about the sets for which the measure applies. This leads to the ideas of {\sigma}-algebras and Lebesgue measure, which are the established formalisms of measure theory. There still exist sets that are not Lebesgue-measurable (the one constructed above is an example), but this is no longer an inconsistency of the theory; it’s a `weirdness’ of math.

Fundamental domains

This post is about the area of math known as general topology, and I’ll thus assume some basic background (for example, chapter 2 of the textbook by Munkres should be enough). The post was produced with a slightly modified version of Luca Trevisan’s latex2wp, and the source .tex can be found here. There is some hope that next time I’ll be able to make the math look better.

What made me write this was the Internet’s apparent lack of a simple but rigorous introduction to fundamental domains and their usefulness in taking quotients by group actions. The goal is, for a space {X} and group {G} acting on {X}, to establish sufficient conditions for the quotient {D/G} of a fundamental domain {D} (to be defined below) by the induced action of {G} to be homeomorphic to {X/G}.

1. A motivating example

Consider the space {\mathbb{R}^2} in the standard topology, and the (discrete) group {\mathbb{Z}^2} acting by translation: {(m,n)\in\mathbb{Z}^2} acts on {(x,y)\in\mathbb{R}^2} by sending it to {(x+m, y+n)}. Suppose you want to get an understanding of the quotient space {\mathbb{R}^2/\mathbb{Z}^2} with respect to this action. Basically, you want to take the real plane and glue together all the points in each orbit under {\mathbb{Z}^2}. Working with all of {\mathbb{R}^2} can make your head hurt, so here’s a trick: consider the unit square {D=[0,1]^2}. It almost contains a single representative of each orbit, except on the boundary. This gives us the much simpler gluing instructions

[figure: the unit square with opposite edges identified, yielding the torus]

The resulting space is the torus. The subspace {D} is known as a fundamental domain for the action of {\mathbb{Z}^2} on {\mathbb{R}^2}.

2. Group actions on topological spaces: a crash course

In topology, one often considers quotients of topological spaces by group actions (for example, in the beautiful theory of covering spaces, which you don’t really need to know to get this post). This is a natural extension of group actions on sets which takes into account the continuity of the topological spaces:

Definition 1 A topological group {G} is a set equipped with both a topology and a group structure such that

  1. The multiplication map {G\times G\to G} is continuous.
  2. The inverse map {G\to G} is continuous.

Definition 2 A topological group {G} acts on a space {X} if {G} acts on the set {X} and the corresponding action {G\times X\to X} is continuous.

To get some practice with these concepts, do these exercises:

Exercise 1 If {f:X\times Y\to Z} is a continuous map, show that for every {x\in X}, the map {f(x,\cdot):Y\to Z} is continuous.

Exercise 2 If {G} acts on {X}, show that for any {g\in G}, the map {f_g:X\to X} given by {x\to gx} is a homeomorphism.

3. Fundamental domains

One way to define a fundamental domain formally is the following (though we won’t use the full strength of this definition):

Definition 3 A fundamental domain is a closed subset {D\subset X} such that {X} is the union of translates of {D} under the group action:

\displaystyle \begin{array}{rcl} X= \displaystyle\bigcup_{g\in G}gD \end{array}

and such that {\mathop{Int}(gD\cap g'D)=\emptyset} for any two distinct translates.

What we ideally want to get is {D/G\cong X/G}. I know of no result stating that, and maybe there’s a counterexample. The following two propositions detail what I do know in terms of sufficient conditions:

Proposition 4 If:

  1. {X/G} is Hausdorff, and
  2. {D} is compact, or {D/G} is compact,

then {D/G\cong X/G}.

Proof: Let {q,p} be the canonical quotient maps from {D,X} to {D/G,X/G} respectively, and let {i} be the canonical inclusion {D\to X}. Whenever {a,b} are in the same {G}-orbit in {D}, they get mapped to the same element in {X/G}. Consequently, we have the following commutative diagram

[figure: the commutative diagram with {q}, {p}, {i}, and the induced map {f}]

where {f} is continuous and unique by the universal property of the quotient. What does {f} do? Take an element {a\in D}. Under {p\circ i}, {a} gets mapped to its {G}-orbit in {X/G}; but {p\circ i = f\circ q}. Hence {f} maps the class of {a} in {D/G} to the class of {a} in {X/G} (which is exactly the natural map we would expect to get).

Observe that {f} is injective: if {[a],[b]\in D/G} are such that {f([a])=f([b])}, it follows that there are some {a,b\in D} which go to the same element in {X/G}, and thus {a\sim b}, so {[a]=[b]}.

Moreover, {f} is surjective: observe that {p\circ i} is surjective, since {D} contains a representative of each orbit, and thus {f\circ q} is also surjective, which cannot happen if {f} fails to be surjective.

Thus, {f} is a continuous bijection {D/G\to X/G}. Either hypothesis in (2) gives that {D/G} is compact (if {D} is compact, then so is {D/G}, being a continuous image under the quotient map), and since {X/G} is Hausdorff, a continuous bijection from a compact space to a Hausdorff space is a homeomorphism. \Box

Going back to our example, obviously {[0,1]^2} is compact, and it’s easy to check that {\mathbb{R}^2/\mathbb{Z}^2} is Hausdorff.

Proposition 5 In the notation of the previous proposition, if {p\circ i} is a quotient map, then {D/G\cong X/G}.

Proof: Observe that {p\circ i} is a surjective continuous map {D\to X/G}, and that {D/G} is the quotient of {D} by the equivalence relation whose classes are the subsets {\{(p\circ i)^{-1}(\{z\}) \ \big| \ z\in X/G\}}. Thus, Corollary 22.3 from Munkres applies to tell us that there is a homeomorphism {D/G\to X/G}. \Box

This would hold, for example, if {p} is a closed map, since {i} is a closed map by {D} being closed in {X}, compositions of closed maps are closed, and every closed map is a quotient map.

Arbitrarily biasing a coin in 2 expected tosses

Here’s a neat probability trick that I learned from Konstantin Matveev and which, I think, everybody mildly interested in math should know about:

Problem. Given a fair coin, how do you (efficiently) generate an event E with probability 1/5?

Solution. We can, of course, toss the coin three times, giving us a total of 8 equally likely outcomes, re-toss whenever we see one of our 3 least favorite outcomes, and declare E on exactly one of the remaining 5. This algorithm requires an expected number of tosses equal to 3\times 8/5=24/5. But what if instead of 1/5 we have 1/1000000? You can easily see that the expected number of tosses to emulate a probability of 1/n grows logarithmically with n. Even worse, what if we had 1/\pi? Well, here’s a trick that gets rid of both of these problems: let

\frac{1}{5} = \displaystyle\sum_{i=1}^\infty \frac{a_i}{2^i}

for a_i\in \{0,1\} be the binary expansion of 1/5. Then, start tossing the coin until it lands heads, at some time I. If a_I=1, declare that E has occurred; otherwise, E has not occurred. Then clearly

\mathbb{P}[E]=\displaystyle\sum_{i=1}^\infty \mathbb{P}[E\ \big| \ I=i]\mathbb{P}[I=i]=\displaystyle\sum_{i=1}^\infty \frac{a_i}{2^i}=\frac{1}{5}

Furthermore, notice that \mathbb{E}[I]=2, regardless of the probability we want to emulate! Well, that seems pretty efficient. When you think about it some more, it really appears to be mind-boggling – you can emulate extremely small, or irrational, probabilities with just two expected tosses. Moreover, you don’t need to have the binary expansion of the probability in advance – you can compute each next digit on the fly, as the experiment unfolds.
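To see the two methods side by side, here’s a quick simulation sketch in Python (the function names are my own; `random.random() < 0.5` plays the role of the fair coin, and the targets 0.2, 24/5, and 2 are the numbers derived above):

```python
import random

def rejection_fifth():
    """Probability 1/5 by rejection: toss 3 coins (8 outcomes),
    retry on 3 of them, declare E on exactly one of the remaining 5."""
    tosses = 0
    while True:
        outcome = 0
        for _ in range(3):
            tosses += 1
            outcome = 2 * outcome + (random.random() < 0.5)
        if outcome < 5:                    # accept 5 of the 8 outcomes...
            return outcome == 0, tosses    # ...and declare E on one of them

def binary_expansion_event(p):
    """Probability p in 2 expected tosses: toss until heads at time I,
    declare E iff the I-th binary digit of p is 1 (digits on the fly)."""
    x = p
    tosses = 0
    while True:
        tosses += 1
        x *= 2                     # shift out the next binary digit of p
        digit = x >= 1
        x -= digit
        if random.random() < 0.5:  # the coin lands heads at time I = tosses
            return bool(digit), tosses

random.seed(0)
n = 200_000
rej = [rejection_fifth() for _ in range(n)]
exp = [binary_expansion_event(1 / 5) for _ in range(n)]
print(sum(e for e, _ in rej) / n, sum(t for _, t in rej) / n)  # ≈ 0.2, ≈ 4.8
print(sum(e for e, _ in exp) / n, sum(t for _, t in exp) / n)  # ≈ 0.2, ≈ 2.0
```

Both hit the target probability, but the expected toss count of the second method stays at 2 no matter how small (or irrational) p gets.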

Combining this with a standard unbiasing technique, say von Neumann unbiasing, this gives you a very simple procedure: given a biased coin that lands heads with probability 0<p<1, you can simulate a biased coin that lands heads with probability 0<q<1 for any other q. Any binary source of randomness is convertible to any other such source.
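A minimal sketch of the von Neumann step (the function name and the test bias 0.3 are my own choices for illustration): draw the biased coin in pairs, keep the first bit of the first unequal pair, and discard equal pairs. Since heads-tails and tails-heads are equally likely for any bias, the kept bit is fair.

```python
import random

def von_neumann_fair_bit(biased_toss):
    """Extract one fair bit from a biased 0/1 source:
    draw pairs; (1,0) -> 1, (0,1) -> 0; discard equal pairs and retry."""
    while True:
        a, b = biased_toss(), biased_toss()
        if a != b:
            return a

random.seed(1)
biased_toss = lambda: int(random.random() < 0.3)  # heads with probability 0.3
bits = [von_neumann_fair_bit(biased_toss) for _ in range(100_000)]
print(sum(bits) / len(bits))  # ≈ 0.5 despite the biased source
```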

But we haven’t said anything about the efficiency of unbiasing. There, we can’t do as well as in biasing: there is a fundamental obstacle, the information-theoretic limit. Roughly speaking, the amount of information a biased coin tells us is always strictly less than the amount of information we get from an unbiased coin – this is why biasing is easier than unbiasing. Fortunately, there is a procedure that lets us extract an unbiased stream of bits that on average achieves the best performance theoretically possible: see this paper by Mitzenmacher to learn more.

I guess the moral of all this is the following: if you’re stuck on a deserted island with \pi - 1 other people, you need to decide who gets eaten first, and all you have in your random arsenal is a suspicious-looking coin handed to you by one of your shipmates, do not despair – you can still make sure you have a fair chance of surviving the day.

Reflections on “The library of Babel” and computational complexity

“…mirrors and copulation are abominable, because they increase the number of men.”

“Tlön, Uqbar, Orbis Tertius”, Jorge Luis Borges

You can tell that Borges was very fond of reflections, and now I intend to try to make him happy.

In short, the Cosmic Coincidence Control Center (and it seems that I’m included in that number?) was extremely busy last week. After finishing my first-ever short story, that feeble imitation of Borges, bearing the following arrogant dedication “While this story was being written, I thought I had stolen Borges’ style; but now I know – he stole my idea”, I was ruthlessly hunted down – so after all it was me who stole something, but hey, who is to say.

First, I decided to write my first paper for the science fiction class I’m taking (which is absolutely fun, thanks to this guy) on “The library of Babel”. OK, I can take that – after all, you might argue that I have free will and whatnot, so in fact it was not a coincidence.

Next, I randomly decided to watch a video by vihart called “Twelve tones”, cause, you know, it seemed to be her most popular one. And – bam! – there was “The library” again.

After that, I was even more randomly reading the chapter on randomized algorithms from the book on computational complexity by Oded Goldreich, and guess what, the quote at the beginning was:

I owe this almost atrocious variety to an institution which other republics
do not know or which operates in them in an imperfect and secret manner:
the lottery

Jorge Luis Borges, “The Lottery in Babylon”

I know, it’s not a library, it’s a lottery, but a lottery is just the closest equivalent of a library to people doing randomized algorithms – after all, a bunch of monkeys randomly typing on a bunch of typewriters will produce the works of Shakespeare at some point. And a Babylon is like a baby Babel anyway.

Finally, it turned out that the book I blogged about last week, “Orphans of the sky”, is way too similar to “The library of Babel” – something I realized only after re-reading the library (or rather, “The library”. haha). It’s not just that both things came out in 1941 (yeah, I don’t know, it’s crazy), but they both construct extremely similar settings, visually and conceptually. Read them and you’ll know – don’t want to spoil anything!

All in all, it was pretty obvious that Borges was after me, and that he wouldn’t leave me alone unless I wrote something about the library and about computational complexity. So here we are now.

What is this library anyway? The premise of the story is simple enough: a library which contains all possible books 410 pages long, conveniently stacked in a seemingly infinite array of identical hexagonal galleries, which comprise all the world. It has the complete works of Shakespeare, the biographies of all people that have ever lived on Earth, the proofs of a bunch of conjectures in mathematics, these same proofs with the last line wrong, “The library of Babel”, etc. Sure, it’s a big place. It also has people randomly walking up and down and thinking they have it all figured, arguing that, you see, a pentagonal gallery would be fundamentally impossible, so that’s why galleries are hexagonal.
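For a sense of just how big: the story specifies 410 pages per book, 40 lines per page, about 80 symbols per line, and an alphabet of 25 orthographic symbols, so the number of distinct books is

```latex
% number of distinct books in the Library
25^{410 \times 40 \times 80} \;=\; 25^{1\,312\,000} \;\approx\; 10^{1\,834\,097}
```

a finite number, but one whose decimal expansion alone would fill a good chunk of a book.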

But I don’t really want to talk about the social metaphors of the library (a decent subject in its own right); rather, I like to think of it as a representative of a somewhat underrepresented part of SF, something you might reasonably call “math fiction”.  Borges wrote several other stories with a strong flavor of mathematics – “The Aleph”, “The garden of forking paths”, “Blue tigers”, “The book of sand”, to name a few amazing ones.

Is MF SF? I would argue that it is, for:

1) math is as good a science as any of your usual ‘favorites’ in SF – physics, chemistry, biology – and in fact, it is the language underlying all of them, a language of even greater expressive power

2) yes, all the ‘falsifiable hypothesis blabla’ stuff does apply to mathematics, and in fact, modern mathematics seems to rely more and more on simulations and experiments

3) MF has already sneaked in SF: there are works that arguably classify as MF which have won a bunch of awards. I know for I’ve read one such – “Permutation city” by Greg Egan, which I strongly recommend to people interested in the computational aspects of consciousness.

YAY MATH FICTION! So, “The library of Babel” uses a very simple mathematical idea – “the set of all sequences of a given length, in a given set of symbols” – to achieve very interesting and complicated effects, and that makes it great math fiction. Suppose you wanted to write a book, and you had some reasonably good idea of what you want it to be about, and you knew it wouldn’t be longer than 410 pages. It then seems very plausible that, if someone hands you a book and you read it, it will be qualitatively easier for you to tell if that’s the book (or a book) you want to write. Then, if you just go to the library and read all books (for there is a very big, but finite number of such books), you will finally find one that suits you!  So you’ll have achieved a qualitative improvement by increasing your efforts only quantitatively. Essentially, it might seem that you’ve written a book without writing it!

This has two consequences: one philosophical, one computational. First, is an author just a treasure-hunter? Does an author create a work, or has the work been there all the time, and the author is `merely’ the one who found it? What the hell?

But hey, that’s not a big deal. What if we try to write books in the way described above? What if we try to do math the way described above – if we want to prove a theorem, we just go through all possible proofs of a given length, for all lengths, until we find one that works? Then mathematical discovery will be more or less fully automated! Ideas of the sort motivated the computational revolution that was just starting at the time Borges wrote his story, and they shape much of modern computational complexity theory.
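As a toy sketch of the “enumerate all strings and check” idea (everything below is illustrative; the “checker” is just a hypothetical predicate standing in for a real proof verifier):

```python
from itertools import count, product

def brute_force_search(alphabet, is_valid, max_len=10):
    """Enumerate all strings over `alphabet` in order of increasing length,
    returning the first one the checker accepts (or None if none is short enough)."""
    for n in count(1):
        if n > max_len:
            return None
        for candidate in product(alphabet, repeat=n):
            s = "".join(candidate)
            if is_valid(s):
                return s

# Toy "theorem": find a string over {a, b} containing "abba".
found = brute_force_search("ab", lambda s: "abba" in s)
print(found)  # the shortest hit, "abba" itself
```

The catch, of course, is the cost: there are 2^n candidates of length n, so the search is exponentially more expensive than checking any single candidate – which is exactly the quantitative/qualitative gap discussed next.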

As for the point of the above example in this context – we might need some new, more practical definitions of quantitative and qualitative differences after all. In particular, when you’re searching for something, going through all possibilities should count as qualitatively more expensive than examining a single one – and that’s some intuition for where the distinction between polynomial and exponential time in computer science came from. Here’s a nice paper on that topic that I don’t fully understand (yeah, I don’t really understand it either): Why Philosophers Should Care About Computational Complexity

Reflections on “Orphans of the sky”

While the book is still fresh in my mind (it’s about 1 hour, or 60 minutes, that is, 3600 seconds, behind me). You know a science fiction book is good (to you) when it constructs curious ideas and situations you haven’t ever imagined before (which are of course made possible by some kind of, well, technology; otherwise it wouldn’t be much of an SF work; or would it?). Another way you know a science fiction book was good to you is when you read it, and then (of course) go to wikipedia, see when the thing was written, and be like “What? I thought it was written in the 70s or something…”.

“Orphans of the sky” by Heinlein was good to me in both respects. If I had to summarize the insight I gained from it in a sentence, it would roughly say this: the concepts of ‘humanity’, ‘human nature’ and ‘common sense’ are highly dependent on, and flexible over extremely short time scales with respect to, the knowledge passed from parents to children.

And here’s a quote that is both representative of this idea and a sort of motivation for the study of general topological spaces as opposed to metric spaces (What? What did I just say?…):

Metrical time caused him as much mental confusion as astronomical distances, but no emotional upset. The trouble was again the lack of the concept in the Ship. The Crew had the notion of topological time; they understood “now,” “before,” “after,” “has been,” “will be,” even such notions as long time and short time, but the notion of measured time had dropped out of the culture. The lowest of earthbound cultures has some idea of measured time, even if limited to days and seasons, but every earthly concept of measured time originates in astronomical phenomena; the Crew had been insulated from all astronomical phenomena for uncounted generations.

Yossarian lives!

“They’re trying to kill me,” Yossarian told him calmly.
“No one’s trying to kill you,” Clevinger cried.
“Then why are they shooting at me?” Yossarian asked.
“They’re shooting at everyone,” Clevinger answered. “They’re trying to kill everyone.”
“And what difference does that make?”

Joseph Heller, “Catch 22”

A METAPHORICAL SEARCH ENGINE.

   A METAPHORICAL SEARCH ENGINE.

      A METAPHORICAL SEARCH ENGINE.

It’s not a metaphor for a search engine. Get it? It’s a thing that finds metaphors for you. And it’s called “Yossarian Lives!”. Now how cool is that. I saw this sometime this summer, and recently the alpha’s image search has become operational. I tried it briefly and I have to admit the images it returned were pretty crazy and I could see a meaningful connection to my query in only a couple of them – but still, it was an interesting analogy. I was yossarianlives!ing for “moon” and I got devils. I then realized the shape of their horns resembled that of the moon crescent and was “wow”. Of course, there might have been many other connections I missed, due to the Stephen Fry problem (oh man I love these guys, they like everything I like).

Is the Stephen Fry problem just a convenient excuse? Is the final version going to be much better than the alpha? Are the images returned plain random? The future will show. However, the idea itself is amazing. “Outsourcing our minds”, bla bla bla. Shut up. Nothing (or at least, nothing yet) can outsource your mind, it can only inspire you to think deeper. Yossarian Lives!, if it lives up to its promise, will be a free, automated, non-stop service for blowing people’s minds. Remember the good old hanging around and not thinking about anything, just staring at the emptiness, when suddenly an amazing idea flashes through you mind, and you’re like “OMG THIS IS SO EPIC HOW COULD I NOT SEE THE CONNECTION BEFORE”? No more waiting for it to randomly happen – you just go to Yossarian Lives!, strike a couple of keys and be like “Wow. Wow. Wow. Wow….” for hours. An intellectual bomb. A mind-bender. In terms of a simple metaphor (haha), if the thing works as promised, we’ll have

\frac{\text{Yossarian Lives!}}{\text{hanging around waiting for a flash}}=\frac{\text{taking the roller coaster}}{\text{walking to grandma's house}} \ \ \ (1)

Is (1) good? Is it bad? Is it unnatural? Well, it’s tempting, and it could bring great change to the way people think about the world. And I think this is always good. If you don’t like it you just don’t use it.

Some other arguments the creators bring up (as if (1) is not enough) can be found in this very interesting essay. A great one is the following: search engines nowadays are, by nature, predicting their users and pointing them either to knowledge the majority of other people found useful (like, autocompletion), or to knowledge that is similar to what they searched for before. This may have many obvious advantages, but they come at a price: your knowledge horizon becomes conformist and hard to change. Search engines are trying to kill everyone. So they’re trying to kill you! It’s a pretty unfortunate Catch-22, isn’t it? On the other hand, the more subjective nature of the experience of understanding a metaphor has the potential to turn all this around (and confuse entire nations, I’m guessing, but there’s no other way).

Another reason why I find all this amazing is that this project seems to lie at the intersection of… everything. You notice this post is in almost all categories on my blog, as well as in Meta. Yossarian Lives!’s very heart is an objective process that seeks highly subjective results. The idea relates mathematics with art, determinism and rigorous theoretical ideas with the mind’s inner, sometimes seemingly arbitrary, associations and feelings. All fields of knowledge are basically clustered around these two poles, which often leads to people entirely dedicating themselves to one and forgetting the other, and consequently to lack of communication, understanding, and, I’m pretty sure, many interesting ideas. Well, I believe the poles are much closer than they appear to be to most people, and this project has a great potential to make me more right (:

So, what are you waiting for??? Go ahead and try it!
