The University of Bristol Email Charter

I really like the University of Bristol’s new email charter, but I think it could be made clearer and more actionable, especially since it can build on previous work and thought about making emails clearer, shorter, and fewer in number. Below, I’ll go through the points listed in the original charter, try to clarify them, and add some actions that give the points teeth.
I’m really excited to announce that my longstanding package for working with the US Census Bureau API, cenpy, has gotten some long-needed love and attention. The new method of working with the data is really slick (if I do say so myself).
Check it out in this gist! If you like it, hate it, or want to improve it, hit me up over on the project or on my Twitter.
One thing I find very difficult to accept is crunch time. Not that I can’t cope, or even succeed, in crunch, per se. Rather, I’m beginning to find it peculiar that folks respond to difficult times with faith in their ephemerality… as if
this specific difficult phase is going to pass
is somehow heartening.
When my job is at its worst, I hear this a lot.
When comparing a multilevel model to a fixed-effects model, it’s important to consider how each is parameterized. For instance, say you’re comparing a no-pooling model with a partial-pooling variance-components model. In the no-pooling case, we have:
$$ y= \Delta u + \epsilon$$
as the specification, where $\Delta$ is the dummy-variable matrix, $y$ is the outcome of interest, and $\epsilon$ is the usual homoskedastic error term for the responses.
The Geometer’s Angle

What John O’Loughlin says in his recent presidential address in Political Geography strikes me as substantially similar to much of what I’ve read from him on the field. Indeed, it reminds me of the same line I got from him as a PhD applicant seeking to work in quantitative political geography. I’ll never forget: right at the time of (what I thought, and still feel, was) great ground-breaking work in political science focused directly on the new understandings possible from aggregate electoral data [1,2,3], he suggested that electoral analysis needed to go beyond this.
To help subpackages that depend on libpysal, its API will shortly change to match the API we’ve laid out at migrating.pysal.org.
If you test your package against the PyPI version of libpysal (which you get using pip install libpysal), you already know whether your changes break against what’s currently released. However, if you’d like some lead time to detect breaking changes in the development version of libpysal on GitHub, feel free to follow these directions on how to set up optional tests on Travis.
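The exact setup lives in the linked directions, but the general shape of such an optional build is roughly this (a hypothetical `.travis.yml` fragment of mine, with an allowed-to-fail job that installs libpysal from GitHub):

```yaml
matrix:
  include:
    - env: LIBPYSAL=release   # PyPI version: must pass
    - env: LIBPYSAL=dev       # GitHub development version
  allow_failures:
    - env: LIBPYSAL=dev       # dev breakage warns, doesn't fail the build
install:
  - if [ "$LIBPYSAL" = "dev" ]; then
      pip install git+https://github.com/pysal/libpysal.git;
    else
      pip install libpysal;
    fi
script:
  - pytest
```

The `allow_failures` entry is the key: the dev-version job still runs and reports, so you see breakage coming, but it can’t turn your build red.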
I’ve always been interested in how PySAL stacks up against NetworkX for building the dual graph of a polygonal lattice, so I did some performance testing. This is by far the most common spatial operation I do on a day-to-day basis, and it looks like PySAL’s constructors are still the fastest for cases where we can assume planarity. But, given how simple the geopandas-only solution is, I look forward to the day when it catches up.
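For a sense of what the geopandas-only version looks like, here’s a toy sketch (my own synthetic lattice, not the perftest data): build a 3×3 grid of unit squares and recover adjacency with a self spatial join on `touches`. Note this gives queen contiguity, since corner-touching squares also “touch”.

```python
import geopandas
from shapely.geometry import box

# A 3x3 lattice of unit squares as a toy polygonal dataset
polys = [box(i, j, i + 1, j + 1) for i in range(3) for j in range(3)]
gdf = geopandas.GeoDataFrame(geometry=polys)

# geopandas-only dual graph: polygons sharing any boundary point "touch";
# keep i < j to record each undirected edge once
joined = geopandas.sjoin(gdf, gdf, predicate="touches")
edges = {(i, j) for i, j in zip(joined.index, joined["index_right"]) if i < j}
```

On this grid that yields 20 edges: 12 rook (shared-edge) adjacencies plus 8 corner adjacencies, which is exactly why the PySAL constructors, which distinguish rook from queen contiguity, are still my default.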
I just finished attending the 2018 GIS Research UK conference at Leicester University.
I presented twice: once on some new methods in spatial clustering and once for the CDRC Brexit data analysis competition. I had a really good time participating in the data analysis competition, and it struck a chord with quite a few conversations I’ve had with my friend & colleague Taylor Oshan, and with something Morton O’Kelly said at this year’s annual American Association of Geographers meeting.
My entry in the Consumer Data Research Centre’s Brexit Data Competition is called “Tension Points: A Theory & Evidence” (static), which I talked about at the 2018 GISRUK conference.
There is an abstract describing some of the work, which I submitted to get to the final round; but if you’re computationally inclined, you’ll find everything you need to replicate my modelling & analysis in this Jupyter Notebook (raw). You’ll need at least scikit-learn, pystan, statsmodels, and geopandas to run it.
This paper is the culmination of work I started after seeing a talk by Phil Chodrow, on a paper that eventually became his quite interesting NAS paper on segregation and entropy surfaces.
I was intrigued by the prospect of using spectral clustering for constrained clustering problems. Specifically, I’d known that affinity-matrix clustering could be adapted to constrained contexts ever since reading about hierarchical Ward clustering, but I hadn’t seen a convincing demonstration of how to work this out for a general affinity-matrix clustering method.