The Spectre of Math

December 15, 2015

Esperanto and Math Textbooks, #EsperantoLives

Filed under: Esperanto,Mathematics,Teaching — jlebl @ 12:01 am

I haven’t made a post in quite a long time, so I thought I’d take the opportunity of the #EsperantoLives campaign to make one. I want to focus on Esperanto’s usefulness for math, specifically for access to math education in the developing world (and some tangential topics). It’s a longer post than most: an informal essay on the subject.

What is Esperanto? It is a constructed language, deliberately made very regular and easy to learn, and designed to avoid the ambiguities, idioms, and other things that make communication between cultures difficult. It is also intended to be a second language for everyone, giving no one an unfair advantage.

Let’s talk a bit about math textbooks, which those who know me know is a favorite topic of mine. This is really about college-level textbooks, though some of it applies to lower levels. Nowadays, if you look on Amazon for the new edition of Stewart’s calculus, it costs something like $300. It’s a great book, but while that price might be more or less OK for a middle-class American or European, it doesn’t cut it for poorer students, and most definitely not for students in developing countries. There has been quite a push recently for high-quality free textbooks to partially attack the problem. That is, low-cost books for poor Americans and some Europeans; there’s quite a choice of free college-level math textbooks in English. Even some poorer countries benefit: my textbooks are used in places like Tanzania and India. But what if you happen to live in a part of the world that does not speak English? The developing world where English is not spoken is at a huge disadvantage in terms of education. It is very difficult to translate textbooks into every possible language, so students from large rich countries will always have access to more.

There are two choices. A short-term (and far from ideal) solution is to push for English education in the developing world, so that students there have better access to educational materials. A poor country that does not speak English is nowadays at a big disadvantage if it wishes to grow the ranks of its university educated. But even if those students learn English, their command of the language is likely to be poor, and since English (like any natural language) is prone to vagueness, that compounds the problem, especially in technical fields like math, science, and engineering, where precision is paramount.

The second choice (much more long term) is to move to Esperanto (or a similar language). There are several advantages. Since it is meant to be a second language for everyone, nobody is at a disadvantage. It can be mastered quickly, and it avoids ambiguities, so understanding materials is less of a problem.

This may seem to assume universal adoption as a second language, which is probably not realistic even in a very distant future. But we don’t need to get it perfect. We don’t even need to get close. We just need to get closer. If, for example, the EU decided to start pushing Esperanto as a common language, that would be enough. Creating educational materials for higher education is sufficiently niche that it is hard for even smaller rich countries to cover all the bases. If a majority of Europeans spoke Esperanto at some point, educational materials could be easily shared. It would also mean that developing countries could use that work if they started teaching Esperanto, just as countries where English is spoken to some degree are able to take advantage of the wealth of material now.

You might say it is a naive, unrealistic dream … perhaps. You might say that English is the “lingua franca,” but that’s really only true in the Western world. English is not even the most spoken language in the world. Furthermore, I am not talking about the next 10 years, nor the next 50, and perhaps not even the next 100. Around 70-80 years ago, a good case could be made that French would never be displaced as the “lingua franca.” A hundred years ago, French was clearly the international language, without argument. Two hundred years ago, Latin was still used as the “lingua franca” in science and medicine. So things that might seem immutable, unchangeable, can in fact change within a few decades.

Finally, and perhaps more tangentially, Esperanto would be far better for science. Most international science is nowadays done in English, but from my own experience there are many good, even world-renowned, mathematicians whose English is quite sub-par. Many mistakes enter the literature, and many results are ignored or lost, because the right person couldn’t quite read or write English well enough. And remember that up until the 1800s, all math was done in Latin. Then, up until the 1960s, it was very common to see German and French. Russian was commonly used even later than that, and there are still many publications in national languages. The language used can change within a generation or two. Because Esperanto is easy to learn, if it started making inroads into science it could take over much more quickly than English did over the last half century.

What can we in the rich, developed world do? We can learn Esperanto and help create more texts and educational material in Esperanto (among other things). We have the luxury to do so. I myself plan to translate some of my books into Esperanto at some point, once I gain more confidence that I am writing good Esperanto, not just passable Esperanto. And in the long term, we will all fare much better with a more connected and generally richer world. Think about how much we are putting into medicine and technology while large parts of the world are simply trying to survive. What if every country could produce and then employ great scientists and engineers in the same quantity as we do?

So how did I get into Esperanto? I heard of Esperanto and the idea behind it a long time ago and always thought about learning it, but I only started recently, once it came out on Duolingo. That seemed to be the only way to keep me motivated. I’ve been learning since the end of May this year, and by now I can read books and magazines and hold an online conversation in Esperanto. I could probably hold a live conversation as well, though I haven’t tried yet. On the other hand, I’ve been learning French on and off for roughly the past two decades, including very actively this past year, and I have so far failed to reach any sort of usable level. So Esperanto definitely is a lot easier to learn, both in terms of grammar (simple, with no exceptions) and vocabulary (lots of words are built from a smaller set of basic roots).

Kaj tio estas ĉio. (And that is all.)

November 26, 2014

Law of large numbers: idiots, monkeys, CEOs, politicians, and nuclear war

Filed under: Economics,Mathematics,Politics — jlebl @ 1:26 am

Something that seems to be ignored by many people is the law of large numbers. Suppose you take an action that has a 99% rate of success. That’s a 1% chance of failure. Tiny! Well, do it enough times and failures will happen. Given enough candidates with a 1% chance of winning, one of them will win. Then everybody is surprised (but shouldn’t be). Suppose that in each of the 435 congressional races there was a candidate who, according to polls, had a 99% chance to win, and a second candidate with a 1% chance. I would expect 4 or 5 of the underdogs to win. If they didn’t, we were wrong about the 99%.
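
To make the arithmetic concrete, here is a quick sketch in Python (assuming the races are independent, which is of course a simplification):

    # Expected number of upsets among 435 races, each with a 1% chance of an upset.
    n, p_upset = 435, 0.01

    print(n * p_upset)          # about 4.35 expected upsets (the mean of a binomial)
    print((1 - p_upset) ** n)   # chance of seeing no upset at all: roughly 0.013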

Or how about entrepreneurs. Suppose you take 100 idiots. They each get a totally wacky idea for a new business that has a 1% chance of success. One of them will likely succeed. Was it because he was smart? No, there were enough idiots. We should not overvalue success if we do not know how many similar people failed and how likely success was. What if you have a person who started two businesses that each had a 1% chance of success? Was that person a genius? Or did you just have 10,000 idiots?

You have surely heard that giving typewriters to monkeys will eventually (if you have enough monkeys and time) produce the works of Shakespeare. Does this mean that Shakespeare was a monkey? No. There weren’t enough idiots (or monkeys) trying. Plus, the odds of typing random sentences, even grammatically correct ones, and ending up with something as good as Shakespeare are astronomically low. Shakespeare was, with a very very very high degree of confidence, not a monkey. I can’t say the same for Steve Jobs. The chance of Jobs having been a monkey is still somewhat smaller than for your average successful CEO. Think of the really important decisions a CEO has to make: there aren’t that many. If we simplify the situation to yes/no decisions on strategic matters, there are only a few in the lifetime of a company. Most decisions are irrelevant to success, and they even out: make a lot of decisions that each cause a small relative change and you will likely end up where you started (again the law of large numbers). But there are a few that can make or break a company. Given how many companies go bust, clearly there are many, many CEOs making the wrong make-or-break decisions. So suppose you hired a CEO, he decided to focus on a certain product and drop everything else, and you made it big. Does that mean your CEO was a genius? Flipping a coin gives you a 50% chance of success too.
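
A quick back-of-the-envelope check of those 1% figures (my own numbers, assuming the ventures are independent):

    p = 0.01  # chance that any one wacky idea succeeds

    # With 100 founders, at least one success is more likely than not;
    # with 10,000 founders, it is essentially guaranteed.
    for n in (100, 10_000):
        print(n, 1 - (1 - p) ** n)   # about 0.634, then essentially 1.000

    # Among 10,000 founders you expect about one to succeed twice in a row
    # purely by luck (two independent 1% successes):
    print(10_000 * p ** 2)           # 1.0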

Same with stock traders. Look and you will find traders whose last 10 bets were huge successes. Does it mean that they are geniuses? Or does it simply mean there are lots of stock traders making fairly random decisions, so some of them must be successful? If there are enough of them, there will be some whose last 10 bets were good. If it was 10 yes/no decisions, then you just need 1024 idiots for one of them to get all of them right. They don’t have to know anything. Let’s take a different example: suppose you find someone who, out of a pool of 100 stocks, has picked the most successful one each year for the last 3 years. This person can be a total and complete idiot as long as there were a million people making those choices. The chance of that person picking the right stock this year is most likely 1 in 100. Don’t believe people trying to sell you their surefire way to predict the stock market, even if they are not lying about their past success.
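
Again, just to spell out the arithmetic behind those two numbers:

    # Ten yes/no bets, all correct by pure guessing:
    print(1 / 0.5 ** 10)                # 1024 random guessers per perfect record

    # Picking the best of 100 stocks three years running, by pure luck:
    n_pickers = 1_000_000
    print(n_pickers * (1 / 100) ** 3)   # about one perfect picker expected among a million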

OK, a more serious example of the law of large numbers: suppose your country carries out a military operation that has a 99% chance of success and a 1% chance of doom for your country. Suppose your country keeps doing this. Each time, it seems completely safe. Yet eventually your country will lose. Start enough wars, even with overwhelming odds, and your luck will run out; statistically that’s a sure thing. If you want your country to be around in 100 years, do not do things that have even an ever so tiny chance of backfiring and dooming it. You can probably guess the (rather longish) list of countries I am thinking of, which with good odds won’t be here in 100, 200, or 500 years.
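
The survival probability falls off faster than intuition suggests; a tiny sketch, assuming the 1% risks are independent:

    p_safe = 0.99
    for n in (10, 50, 100, 500):
        print(n, round(p_safe ** n, 3))   # 0.904, 0.605, 0.366, 0.007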

Let’s end on a positive note: with essentially 100% probability, humankind will eventually destroy itself with nuclear (or similarly destructive) weapons. Conflicts seem to arise every few decades that have some chance of deteriorating into nuclear war. A small chance, but a positive one. Since that seems likely to repeat itself over and over, eventually one of those conflicts will turn into a nuclear war. It can’t quite be 100%, since there is a chance that we will all die in some apocalyptic natural disaster (possibly of our own making) instead. After all, there is also a tiny chance that everybody on earth has a heart attack at the same exact time. Even if we make sure we don’t do anything else dangerous (such as keep nuclear weapons around), civilization will end one day in a massive simultaneous heart attack.

July 12, 2013

MAA reviews, HTML versions, new sections in RA book …

Filed under: LaTeX,Mathematics,Teaching — jlebl @ 7:54 pm

Reviews

MAA has published reviews of both of my books: see here and here. By the way, the PDFs have now each been downloaded from over 40k distinct addresses (approximately 83k together). Since the web version of the diffyqs book is probably more popular than the PDF, there are probably as many people again who have used that.

HTML version of the DiffyQs book

Speaking of the HTML version: after the last release of the diffyqs book, I worked a bit on the HTML conversion. The result uses tex4ht for the conversion and then a Perl script to clean up the HTML. This is very, very hacky, but of course the main point is to make it work rather than to do it cleanly. One of the things I did was to render all math at double the resolution and let the browser scale it down. Then, to make things go a bit faster, I made the code detect duplicate images, of which there are quite a few. I have also been testing data URIs for very small images, but they don’t quite work right everywhere yet. They would cut down on the number of requests needed per page, so surely I’ll do that eventually.
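
The cleanup itself is a hacky Perl script that I won’t reproduce here, but the duplicate-image idea is simple enough that a rough Python sketch gets it across: hash each rendered image and point every duplicate reference at one canonical file.

    import hashlib, os

    def dedupe_images(html_dir):
        canonical = {}   # content hash -> the filename we keep
        rename = {}      # duplicate filename -> canonical filename
        for name in sorted(os.listdir(html_dir)):
            if not name.endswith(".png"):
                continue
            with open(os.path.join(html_dir, name), "rb") as f:
                digest = hashlib.sha1(f.read()).hexdigest()
            if digest in canonical:
                rename[name] = canonical[digest]
                os.remove(os.path.join(html_dir, name))
            else:
                canonical[digest] = name

        # Rewrite the HTML so every reference points at the surviving copy.
        for name in os.listdir(html_dir):
            if not name.endswith(".html"):
                continue
            path = os.path.join(html_dir, name)
            with open(path, encoding="utf-8") as f:
                html = f.read()
            for dup, keep in rename.items():
                html = html.replace(dup, keep)
            with open(path, "w", encoding="utf-8") as f:
                f.write(html)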

The supersampling has both positive and negative effects. The printed version of the HTML now looks a lot better. Not totally great, since I currently render at around 200dpi rather than perhaps 300dpi, but it’s a reasonable compromise. High-resolution displays also get nicer rendering. The downside is that on a regular display the equations are fuzzier due to the lack of hinting.

Of course, MathJax would be the ultimate answer for math display, and that’s the ultimate goal, but I can’t make it work reasonably nicely with tex4ht. I am very picky: I prefer a display that is 100% correct even if uglier over one that is 90% correct and pretty, and every suggestion I’ve tried so far produced very subpar output. I can’t make tex4ht leave all the math untouched. Even then, MathJax chokes on a few expressions I have in the file, so things would require more tweaking to make it all work.

My requirements for math display are: 1) the same font must be used for all math (that’s why I render all math as images); 2) the output must be correct and readable (which totally disqualifies MathML, since even the newest versions of all browsers do a terrible job on all but the simplest equations, and often even on those); 3) the result must be usable on as many browsers as possible.

I think the eventual solution is to write my own TeX parser that can read the subset of LaTeX I use for the book and output HTML pages using MathJax. This sounds simpler than it is: getting it to work on 90% of the input is easy, and then things like figures and certain math constructions get in the way.
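
Just to illustrate what the easy 90% might look like, here is a toy Python sketch (very much not the real thing): leave the math alone for MathJax to render and translate a couple of trivial constructs. Figures, environments, and cross-references are exactly the parts this ignores, and the MathJax URL is today’s CDN path, given only for illustration.

    import re

    MATHJAX = ('<script src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js">'
               '</script>')

    def tex_to_html(tex):
        chunks = []
        for block in tex.split("\n\n"):
            block = block.strip()
            if not block:
                continue
            m = re.fullmatch(r"\\section\{(.+)\}", block)
            if m:
                chunks.append(f"<h2>{m.group(1)}</h2>")
                continue
            block = re.sub(r"\\emph\{(.+?)\}", r"<em>\1</em>", block)
            # $...$ and \[...\] are left untouched; MathJax renders them in the browser.
            chunks.append(f"<p>{block}</p>")
        return (f"<html><head>{MATHJAX}</head><body>\n"
                + "\n".join(chunks) + "\n</body></html>")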

Another possibility is to output SVG instead of PNG for math, using dvisvgm. This keeps the problem of fuzziness on standard displays, but it is really pretty when printed or on high-resolution displays. The downside is poor support (only very new Chrome and Firefox support it somewhat, even they have issues, and it crashes my Android phone). I think MathJax is a better long-term solution, but it will take some work and probably a move away from tex4ht.

New sections in the analysis book

Something I have not mentioned here when it happened is that the analysis book got a bunch of new sections recently (the May 29th version). These are all extra optional sections to fill out a longer version of the course (dependencies, if any, are marked in the notes at the beginning of each section). There are new sections on

  • Diagonalization argument and decimal representation of real numbers (1.5)
  • More topics on series (2.5)
  • Limits at infinity and infinite limits (3.5)
  • Monotone functions and continuity (3.6)
  • Inverse function theorem in one variable (4.4)
  • The log and exp functions (5.4)
  • Improper integrals (5.5)

I am currently working on multivariable chapter(s) that would come after chapter 7. This will still take some time; I have about half of the material in a very rough draft, having massaged bits of my Math 522 notes into something that better fits this book. My plan is for the book to be usable for a standard one-year course on real analysis.

June 25, 2013

New Genius out (1.0.17)

Filed under: Hacking,Linux,Mathematics — jlebl @ 9:18 pm

Released a new Genius version (1.0.17) today. See the website.

The main new thing is that it now uses Cairo for drawing, thanks to an update to the new GtkExtra. At some point I should package up and document all the fixes/changes I’ve made to GtkExtra and submit them upstream; it seems that GtkExtra is now alive again.

May 28, 2013

Computation update

Filed under: Hacking,Mathematics — jlebl @ 4:45 pm

On the scale of lengths of computations I’ve done, this probably counts as the longest so far. If you look down a bit in the blog you’ll find the details. I can now report that there is no degree 21 polynomial p(x,y) with positive coefficients and exactly 12 monomials (the least it can have), such that p(x,y) = 1 whenever x+y = 1, and such that xy is one of the monomials. Now the conjecture is that there is only one such beast (up to switching variables) even after dropping the condition about xy, and the computation is well on its way to proving that. The xy monomial is a bit special since it appears in these sharp polynomials for a bunch of smaller degrees. Anyway, a few more months and we’ll have the answer.
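
For anyone who wants to see the conditions spelled out, here is a small check (using sympy) on the classical degree 3 example x^3 + 3xy + y^3, which equals 1 on the line x + y = 1, has only positive coefficients, and has the fewest possible terms for its degree. The degree 21 search is the same game, just vastly bigger.

    import sympy as sp

    x, y = sp.symbols("x y")
    p = x**3 + 3*x*y + y**3              # the classical degree 3 example

    print(sp.expand(p.subs(y, 1 - x)))   # prints 1, so p = 1 whenever x + y = 1

    terms = sp.Poly(p, x, y).terms()
    print(len(terms))                    # 3 monomials, the fewest possible in degree 3
    print(all(c > 0 for _, c in terms))  # True: all coefficients are positive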

March 29, 2013

Math is a series of trivial observations

Filed under: Mathematics,Personal — jlebl @ 4:38 pm

A mathematical proof is essentially a series of completely trivial observations wrapped in complicated-sounding notation (not complicated on purpose, hopefully). The trick is not to understand the proof once it is written, but to notice those trivial observations so you can write the proof in the first place. I think this is what sometimes discourages people from research mathematics. You work for two weeks on something that feels like a very hard problem, and then the solution seems trivial once found. In my case there were two operations and a limit involved, and the things I was trying to bound are not continuous with respect to that limit, so I flailed around trying all sorts of complicated schemes. Then last night I thought: hey, why not do these two operations in reverse? I get rid of the limit and the problem becomes almost trivial after a bit of linear algebra. It feels good. But on the other hand it feels like: why didn’t I think of this two weeks ago?

February 7, 2013

The computation

Filed under: Hacking,Linux,Mathematics,Technology — jlebl @ 4:47 pm

So the computation for degree 19 has finished (actually a few days ago). Only yesterday did I get around to finishing a short paper (an addendum) and posting it to arXiv; see arXiv:1302.1441. The really funky thing is that there are so many sharp polynomials in degree 19. Up to symmetry there are 16 in odd degrees up to degree 17, yet there are 13 in degree 19 alone. And two of the new ones are symmetric, which is actually surprising; that seems like it should be hard to achieve if you think about how they are constructed. There’s probably a bunch of interesting number theory that appears here. It should be fun to figure out what’s going on.

This was the first time a paper of mine got reclassified to a different archive on arXiv. I put it into algebraic geometry because, well, the motivation comes from geometry, but it got put into commutative algebra instead. That actually makes a lot more sense, especially since none of the geometric motivation appears in this write-up.

Degree 21 has been running for about a week. It will probably run for the next year or so, at which point I really expect it to spit out only one polynomial: the group-invariant one we already know about. That would also be kind of funky, since then there would be two degrees with as few polynomials as possible, and in between a degree with the most polynomials we have found in any degree so far.

January 21, 2013

The correct finite field does wonders

Filed under: Hacking,Mathematics,Technology — jlebl @ 5:26 pm

So the computation I had been running since Thanksgiving was getting nowhere (finding degree 19 polynomials that are equal to 1 on x+y = 1, have the fewest possible terms (11), and have only positive coefficients; we know there are finitely many, the trick is to find them). My code didn’t really have good status updates. Well, it does tell you exactly where it is, but it takes a bit of computation to really see how it’s progressing, so I didn’t do that until a few days ago, when I began to suspect it was going way too slow.

And lo and behold, it was going way too slow. I computed that it would take another 3-7 years or so (different parts of the search space take different amounts of time, so it’s a bit hard to estimate). That was a bummer: about 10 times longer than I thought it should be. At first I was a bit resigned to this reality, but the next day I started to look into it, and one thing I figured out after running a bunch of tests was that one shortcut I was using was never being triggered. The idea is that we need to find when a certain matrix with integer coefficients has a 1-dimensional nullspace. The integer row reduction is done with gmp and is reasonably fast. But since we do this many times over, and most of these matrices are simply full rank (no nullspace), we don’t really need to do the whole computation. So what we can do (and this is a useful trick to know) is to first do the computation in some small finite field, e.g. do everything mod p for some small prime p. If a matrix is full rank mod p, it is full rank. This computation can be done rather quickly, and you don’t even have to do modular arithmetic: all the possible operations can be precomputed into a big table, so instead of two multiplications, an addition, and finding the remainder, you just look it up in a table. Anyway, that gets us quite a bit of a speedup.
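
In rough Python (the real code uses gmp and big precomputed tables, not shown here), the shortcut amounts to Gaussian elimination mod p with a fallback:

    def rank_mod_p(matrix, p):
        """Rank of an integer matrix over Z/pZ (p prime)."""
        m = [[a % p for a in row] for row in matrix]
        nrows, ncols = len(m), len(m[0])
        rank = 0
        for c in range(ncols):
            pivot = next((i for i in range(rank, nrows) if m[i][c]), None)
            if pivot is None:
                continue
            m[rank], m[pivot] = m[pivot], m[rank]
            inv = pow(m[rank][c], -1, p)          # modular inverse (Python 3.8+)
            m[rank] = [(v * inv) % p for v in m[rank]]
            for i in range(nrows):
                if i != rank and m[i][c]:
                    m[i] = [(a - m[i][c] * b) % p for a, b in zip(m[i], m[rank])]
            rank += 1
        return rank

    def certainly_full_rank(matrix, p=23):
        # Full rank mod p implies full rank over the integers; only when this
        # returns False do we need the expensive exact computation (not shown).
        return rank_mod_p(matrix, p) == min(len(matrix), len(matrix[0]))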

Now the thing is that I was using mod 19, since that worked for lower degrees. One thing I forgot when I started the run (remember, it had been a few years since I last looked at this code and ran it) is that the modulus cannot be the same as the degree. The matrices we work with have most entries divisible by the degree, so modding out by 19 essentially always made the matrix all 0 (except for a few 1s scattered around). These matrices were therefore essentially always singular mod 19 and the shortcut never triggered. So after doing a useless mod 19 calculation we had to do the actual integer arithmetic anyway. That’s why it was slow. Damnit!

Well, the calculations were not wrong; I just did a lot more computation than needed. After a small amount of testing, mod 23 seemed like a good finite field to proceed in, so I restarted the code. Suddenly 3-7 years turned into a first estimate of 90 days, and after running things for a day, that turned into an estimate of 30 days.

Then I noticed one more thing (and Danny pointed this out too): his code used symmetry and just threw out half of the nonsymmetric polynomials, since the computations are the same. I remembered that my code didn’t do this. It didn’t make much sense back when the longest run we did was 5 days on one core (for code that is only ever run once or twice, small speedups are somewhat pointless). I implemented the idea and it seems to achieve a 33% reduction in time (there’s still the checking for symmetry, and there are of course symmetric polynomials, so that’s probably close to the best we can get). So anyway, I guess within 20 days we should have the answer.

After it finishes, I still have one more speedup up my sleeve. It could be that I can do the row reduction really fast mod 2 by using binary operations (each row would be an unsigned int). I’m not sure what speedup I can achieve, though; at best 90%, since that’s how many cases mod 2 catches, while mod 23 or so catches essentially everything. So the idea is to do mod 2, then mod 23, and only then, if the matrix is still singular, do the integer arithmetic. If the speedup is another 50%, and my most optimistic estimates hold, that would put degree 21 within the realm of possibility, though at least half a year on 4 cores. That is, within the range of what I’d be willing to run.
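
A rough Python sketch of the mod 2 idea (the real version would pack rows into unsigned ints, as described above): each row becomes the bits of one integer, and row elimination over GF(2) is just XOR.

    def full_rank_mod2(matrix):
        # Pack each row into one integer: bit j holds entry j reduced mod 2.
        rows = [sum((a & 1) << j for j, a in enumerate(row)) for row in matrix]
        ncols = len(matrix[0])
        rank = 0
        for c in range(ncols):
            bit = 1 << c
            pivot = next((i for i in range(rank, len(rows)) if rows[i] & bit), None)
            if pivot is None:
                continue                    # no pivot in this column mod 2
            rows[rank], rows[pivot] = rows[pivot], rows[rank]
            for i in range(len(rows)):
                if i != rank and rows[i] & bit:
                    rows[i] ^= rows[rank]   # elimination over GF(2) is just XOR
            rank += 1
        # Full rank mod 2 implies full rank over the integers; a singular result
        # mod 2 decides nothing, so fall back to mod 23 and then exact arithmetic.
        return rank == min(len(matrix), ncols)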

So the mood went from “I’ll probably give up on d=19 soon” to “maybe d=21 is possible.” All this just by using a different prime :)

December 19, 2012

Interesting calculation

Filed under: Mathematics,Teaching — jlebl @ 5:02 pm

An interesting back-of-the-envelope calculation: UC Irvine is using the differential equations book as its standard text; that’s a couple of hundred students every quarter, and there are at least a few other places with large and small lectures using the book. A reasonable estimate at the current adoption is that at least 1000 students use the book every year. I recently checked on Amazon how much Boyce-DiPrima costs: the new edition is $184 (Yikes!). It costs $110 just to rent for a semester (a used copy was $165). That’s 110-184 thousand dollars a year saved by students, just because of one open book adopted in a couple of large lecture classes. Presumably the adoption rate can only go up, so this number will go up, just from one of my books (savings from the real analysis book will be much smaller, since fewer students take that kind of class).

Now, I have nothing against the publishers, but they have their incentives wrong. Boyce-DiPrima is a fine book, but … there is really no reason to print big, bulky books on expensive paper for these classes. Locally printed coursepacks or cheaply printed paperbacks are much more efficient. And students might actually keep their books, which might help them along in other classes. Currently most students return their books as soon as they can to recover most of the cost. So if you’re teaching, say, a PDE class as I did this semester, you can hardly tell them to brush up on their calculus from their calculus book. They don’t have it anymore!

The incentive for me is simply to make the best book because I want to (it makes me feel good, which I guess is all I can expect). Since I make almost no money on it, I don’t have to inflate the page count just to make it more expensive, or add color and pretty pictures just for the hell of it (which would make the book quite a bit more expensive). Plus, the book is more accessible. I already know students use the web version, even from their phones.

So anyway, I guess I’m providing at least 2 to almost 4 times my salary in free books.

Anyway, if you do want to buy the books (and I make $2.50 on each, yay! I’ve made almost $400 so far; it’s mostly about making me feel good rather than making money), here are the links to Lulu:

Real Analysis, $13.09 + shipping

Differential equations, $16.78 + shipping

Yes, I’m a fan of arbitrary-looking prices. Actually, the reason for those prices is that I simply set them so that I get $2.50 from each book, so the price is (cost of printing) + $2.50 + (Lulu’s cut).

December 17, 2012

New versions of books and new genius

Filed under: Hacking,Mathematics,Teaching — jlebl @ 11:47 pm

So in the last two days I’ve put up new versions of both the differential equations textbook (3 new sections, and of course some fixes) and the real analysis textbook (fixes, plus 4 new exercises). I’ve also made a new release of Genius. Two of those I actually did today while my students were taking their final. The nice thing about proctoring tests for small upper-division classes is that you can get stuff done. There is no cheating going on and there are only a few questions, so I had over two hours to burn. Next semester will be quite different: I’ll have two calculus lectures with 250+ students each. Proctoring an exam for that many students is not at all a relaxing exercise (and then there’s the grading … ugh!)
