

Friday, October 15, 2010

I foolishly tried to convert a matrix M to a vector using as.vector:

as.vector(M)

But this flattens the matrix column-wise, so that items that are adjacent within a row end up non-adjacent in the vector.

The right way to do this seems to be:

library(gdata)
unmatrix(M, byrow=TRUE)
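
A quick illustration of the difference (my own example; as.vector(t(M)) is an alternative row-wise flatten that needs no extra package):

M <- matrix(1:6, nrow=2, byrow=TRUE)   ## rows: 1 2 3 / 4 5 6
as.vector(M)      ## 1 4 2 5 3 6 -- column-wise, row neighbours split apart
as.vector(t(M))   ## 1 2 3 4 5 6 -- row-wise, the ordering unmatrix(M, byrow=TRUE) gives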

Wednesday, September 29, 2010

Mike's and my book is coming out


Our book is finally coming out. You can buy it on Amazon.com, Amazon.de, or Springer.com.

Tildes in URLs (LaTeX)

Obscure LaTeX command:

\textasciitilde{} % for tildes in URLs as text.

I'd been using $\sim$.
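
A minimal usage example (the URL is made up):

\documentclass{article}
\begin{document}
My homepage: \texttt{http://www.example.com/\textasciitilde{}username/}
\end{document}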

Saturday, September 25, 2010

Public and private CVs (LaTeX)

Not directly related to statistics but:

Often one wants to have a public CV that one can put on the web, and a more restricted one that contains private information one only needs for a job application or the like. Instead of maintaining two CVs, there's an easy way to automate this if you are a LaTeX user.

1. For a public CV, type:

## public
pdflatex vasishthcv.tex

2. For a restricted CV, type:
## restricted
pdflatex -jobname vasishthcv "\def\UseOption{opta}\input{vasishthcv}"

where, in the .tex file, you have the following in the preamble:

\ifx\UseOption\undefined
\def\UseOption{optb}
\fi
\usepackage{optional}

and in the text itself, for restricted sections, use:

\opt{opta}{Home address:...}
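
Putting the pieces together, a minimal vasishthcv.tex along these lines should work when compiled with the two commands above (the content lines are just placeholders):

\documentclass{article}
\ifx\UseOption\undefined
\def\UseOption{optb} % default: the public version
\fi
\usepackage{optional}
\begin{document}
\section*{Curriculum vitae}
Publicly visible information: affiliation, education, publications.
\opt{opta}{Home address: appears only in the restricted version.}
\end{document}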

Friday, December 11, 2009

Statistics in linguistics

People in linguistics tend to treat statistical theory as something that can be outsourced--we don't really need to know anything about the details; we just need to know which button to click.

People easily outsource statistical knowledge in an empirical paper, but the same people would be appalled if they hired an assistant to work out the technical details of syntactic theory for a syntax paper.

The statistics *is* the science; it's not some extra appendage that can be outsourced.

Thursday, April 23, 2009

How to get ESS-style indentation in TextMate

This should be standard in TextMate; I don't know why one has to go through so many steps to get it working:

http://gragusa.wordpress.com/2007/11/11/textmate-emacs-like-indentation-for-r-files/

How to update the R bundle in TextMate

Got this from the web somewhere:

Just create a script with the following content:


#!/bin/sh

export LC_CTYPE=en_US.UTF-8
SVN=`which svn`

echo "Changing to Bundles directory..."
mkdir -p /Library/Application\ Support/TextMate/Bundles
cd /Library/Application\ Support/TextMate/Bundles

if [ -d /Library/Application\ Support/TextMate/Bundles/R.tmbundle ]; then
    echo "R bundle already exists - updating..."
    $SVN up "R.tmbundle"
else
    echo "Checking out R bundle..."
    $SVN --username anon --password anon co http://macromates.com/svn/Bundles/trunk/Bundles/R.tmbundle/
fi

echo "Reloading bundles in TextMate..."
osascript -e 'tell app "TextMate" to reload bundles'

Wednesday, July 04, 2007

Selection bias in journal articles

Journals publishing psycholinguistic research generally do not publish null results, because these are deemed "inconclusive". So it's entirely possible that out of 100 experiments, 95 are inconclusive and 5 are "significant", but that all five are Type I errors. Yet it's those 5 experiments that will get published.

The naive rebuttal would be that such a situation would only rarely arise. But the non-obvious thing is that rare events do happen. And if only those five articles are published, how would we ever conclude that we are not in Type I la-la land?
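
A toy simulation of the point (my own illustration; it assumes all 100 experiments test a true null effect with alpha = .05):

set.seed(1)
## 100 two-sample experiments in which the null hypothesis is actually true:
pvals <- replicate(100, t.test(rnorm(30), rnorm(30))$p.value)
sum(pvals < 0.05)  ## roughly 5 "significant" results, every one a Type I error --
                   ## and these are the ones that would get published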

Saturday, April 28, 2007

Rlang mailing list

Roger Levy has created a possibly useful mailing list for exchanging questions about the use of R for language research:

https://ling.ucsd.edu/mailman/listinfo.cgi/r-lang

Tuesday, April 17, 2007

How to extract SEs from lmer fixed effects estimates

Extracting fixed effects coefficients from lmer is easy:

fixef(lmer.fit)

But extracting the SEs of those coefficients is, well, also trivial, but you have to know what to do; it's not obvious:

Vcov <- vcov(lmer.fit, useScale = FALSE)
se <- sqrt(diag(Vcov))
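
To see the estimates and SEs side by side, something like this sketch works (lmer.fit is assumed to be a fitted lme4 model; in more recent lme4 versions a plain vcov(lmer.fit) suffices):

est <- fixef(lmer.fit)
se <- sqrt(diag(vcov(lmer.fit)))
cbind(Estimate = est, SE = se, t = est/se)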

Saturday, February 17, 2007

Hmisc: how to increase magnification

One non-obvious thing (at least to me) about Hmisc's xYplot function is that to increase the magnification of a graph component (or set its other parameters), you have to do the following:

xlab=list("Condition",cex=2)

I.e., you have to make a list out of the parameter, and add whatever information you need. This works generally for any of the xYplot parameters.
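
A hypothetical call showing the idea (the data and variable names here are made up):

library(Hmisc)
set.seed(1)
d <- data.frame(soa = rep(c(100, 300, 500), each = 10),
                rt  = rnorm(30, mean = 500, sd = 50))
xYplot(rt ~ soa, data = d,
       xlab = list("SOA (ms)", cex = 2),
       ylab = list("Reading time (ms)", cex = 2))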

Thursday, January 25, 2007

Using WinBUGS with the Gelman and Hill book on Intel Macs

I finally installed Windows on my Mac (a traumatic experience) and finally got the code working. However, the startup instructions on the website of the book did not work for me, so I offer a working example for other souls as clueless as myself. The first problem is that the libraries have to be installed manually; they do not install automatically as advertised. Second, the R2WinBUGS library has to be loaded explicitly in order to run the critical bugs command.
Also, if anyone out there is thinking of installing a dual-boot environment on a Mac in order to install WinBUGS, there is a bug (no pun intended) in the license installation of WinBUGS. The decode command for the license does not work as advertised, but the license installs anyway.
The working version is here: http://www.ling.uni-potsdam.de/~vasishth/temp/schools2.R
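
Very roughly, the fix amounts to the following (data, inits, and parameters are placeholders here; see the linked schools2.R for the actual working code):

## install the needed packages by hand instead of relying on arm to pull them in:
install.packages(c("arm", "R2WinBUGS"))
## load R2WinBUGS explicitly before calling bugs():
library(R2WinBUGS)
schools.sim <- bugs(data, inits, parameters, "schools.txt",
                    n.chains = 3, n.iter = 1000,
                    bugs.directory = "c:/Program Files/WinBUGS14/")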

Monday, January 22, 2007

Some expensive lessons I recently learnt about R/Sweave

1. If you are going to generate lots of LaTeX tables automatically from an .Rnw file, LABEL THEM (see the sketch below).

2. weaver does not work with xYplot. If you are using the Hmisc library, just don't use weaver. I will present a solution here sometime soon.

The solution: set caching to off (cache=off) in the chunk that loads the Hmisc library and runs the xYplot command(s). You can turn caching on in the chunks before and after that one, but the xYplots need to be computed without caching.

3. xtable is unable to recognize that an R output line containing, e.g., log(sigma^2), has to be set in math mode in the .tex output. In Sweave this has the disastrous consequence that the .tex file does not compile. My kludgy solution is to search and replace in the .tex file after Sweaving it; a cleaner alternative is sketched below.
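
In case it is useful: labels (and captions) can be attached directly in the xtable call, and current versions of xtable can sanitize awkward row names for you, which avoids the post-hoc search and replace. A sketch, assuming tab holds the results to be tabulated:

library(xtable)
## point 1: attach caption and label when the table is generated
print(xtable(tab, caption = "Fixed-effects estimates", label = "tab:fixef"),
      caption.placement = "top")
## point 3: wrap row names such as log(sigma^2) in math mode
print(xtable(tab),
      sanitize.rownames.function = function(x) paste("$", x, "$", sep = ""))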

It's frustrating that such good tools can sometimes be such a pain in the ass. I guess one should be grateful they are there at all.

Saturday, January 13, 2007

Incomplete Review of Gelman and Hill's Data Analysis using Regression and Multilevel/Hierarchical Models

I'm writing this somewhat cranky review as I read the book. Compared to the Pinheiro and Bates book, the examples in this book are initially irritatingly difficult to get working. A major problem with the book is that code involving BUGS runs only on Windows. This excludes readers like me from the action. So I have to wait until I get a Windows machine--but do I really want to start using Windows now? It would have been more helpful if their webpage had prominently mentioned this detail (that the book is Windows-specific). Had they done that, I would probably not have bought it. But now that I have paid for it, I am going to read it.

The website for the book has the data in a pretty disorganized way--why not just make a library? The authors do have a package, arm, on CRAN, but it does not install on any OS except Windows (the first R package I have seen with this property in my seven years as an R user). I tried to wget -r the ~gelman/arm/examples directory but ended up with all kinds of other crap in my directory as well, which was annoying. A zip archive would not hurt.

Chapters 1-3

I did not get a huge amount out of these chapters that was deeply interesting, but it is a good intro for newcomers to regression.

The code for the example in chapter 3 doesn't work on non-Windows machines. Here is a working version.

Chapter 4

The book becomes more and more exciting from about this point onwards. Only one grouse:

Chapter 4 has some principles for carrying out regression for prediction (section 4.6), but it is far from clear where they come from, and the principles have a cookbooky feel (do this, don't do that, without explaining why). It would have been better if the authors had taught the reader to reason about the problem: surely that reasoning constitutes the real principles, and the presented principles are just consequences of it.

[to be continued]

Wednesday, January 03, 2007

Null hypotheses, significance testing and all that jazz

Some amazing articles I've recently read in my ample spare time:

1. Jeff Gill, "The Insignificance of Null Hypothesis Significance Testing", Political Research Quarterly, Vol. 52, No. 3 (Sep. 1999), pp. 647-674. doi:10.2307/449153

2. Andrew Gelman's article

3. And this one: http://www.npwrc.usgs.gov/resource/methods/statsig/index.htm

4. Bowers and Gelman on Exploratory Data Analysis with Hierarchical Linear Models (AKA Multilevel models)

Suitably stunned into silence, the reader may then have the following practical question: how to present one's HPD intervals in a journal, and what else to present?

Here's an answer from Doug Bates.
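
Independently of that answer, here is a minimal illustration of computing an HPD interval from posterior samples with the coda package (the samples are simulated just to show the call):

library(coda)
set.seed(1)
## pretend these are posterior samples for a slope parameter:
post <- mcmc(rnorm(4000, mean = 0.5, sd = 0.1))
HPDinterval(post, prob = 0.95)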