Face To Facebook Press Release

This landed in my inbox this morning and I had such fun reading it (I’d seen the project and the cease-and-desist letter at Transmediale on Sunday) that I thought I’d post it here, in case it introduces other people to this beautifully ironic homage to Facebook (about which you probably know my feelings).

Press Release, February 10th, 2011. Somewhere in Europe.

* Face to Facebook.
https://face-to-facebook.net Face to Facebook is a project by Paolo Cirio and Alessandro Ludovico, who wrote special software to steal 1 million public profiles from Facebook, filtering them through face-recognition software and posting the resulting 250,000 profiles (categorized by facial expression) on a dating website called Lovely-Faces.com.
The project was launched at Transmediale, the annual festival for art and digital culture in Berlin, on February 2nd, in the form of an installation displaying a selection of 1,716 pictures of unwitting Facebook users, an explanatory video and a diagram detailing the whole process. The Lovely-Faces.com website went online the same day.

* The Global Mass Media Hack Performance.
On February 3rd a global media performance started from a few epicenters and within a few days had involved Wired, Fox News, CNN, MSNBC, Time, MSN, Gizmodo, Ars Technica, Yahoo News, WSB Atlanta TV, San Francisco Chronicle, The Globe and Mail, La Prensa, AFP, The Sun, The Daily Mail, The Independent, Spiegel Online, Tagesschau TV News, Sueddeutsche, Der Standard, Liberation, Le Soir, One India News, Bangkok Post, Taipei Times, News24, The Age, Brisbane Times and dozens of others. It was “perfect news” for the hectic online world: it was about a service used by 500,000,000 users and it potentially affected all of them. Even more importantly, it boosted our inherent fear of not being able to control what we do through our connected screens. As Time exquisitely put it: “you might be signed up for Lovely-Faces.com’s dating services and not even know it.” At the end of the day Cirio’s and Ludovico’s Facebook accounts were disabled and a “cease and desist” letter from Perkins Coie LLP (Facebook’s lawyers) landed in their inboxes, including a request to give back to Facebook “their data”. We can properly define it as a performance, since it happened in a short time span, involved the audience in a transformation, and evolved into a thrilling story. The frenzied pace of these digital events was almost bearable.

* The Social Experiment.
In the subsequent days the media performance continued at a very fast pace, and what we still define as a “social experiment” was actually quite successful. Starting on February 4th the news went spontaneously viral: thousands of tweets and retweets pointed to the Lovely-Faces.com website or to articles and blog posts, often urging people to check whether they (and their loved ones) were on the website or not. In a few days Lovely-Faces.com received 964,477 page views from 195 different countries. Reactions varied from requests to be removed (which we diligently honoured) to requests to be included, from anonymous death threats to proposals of commercial partnership.

* Back to Facebook.
We approached the Electronic Frontier Foundation for legal counsel, but after a second warning from Perkins Coie we temporarily put up a notice that Lovely-Faces.com is under maintenance. But they are not OK with that. They want Lovely-Faces.com not to be reachable at all. And they even want the same for Face-to-Facebook.net, the website where we explain the project. So their current aim is basically to remove the entire web presence of this artistic project and social experiment. They missed that Face to Facebook is also meant as a homage to FaceMash, the system Mark Zuckerberg built by scraping the names and photos of fellow classmates off school servers, and which was the very first Facebook. Furthermore, it’s a bit funny to hear Facebook complain about the scraping of personal data that are quasi-public and doubtfully owned exclusively by Facebook (as a Stanford Law School scholar wondered while analyzing Lovely-Faces.com). We obtained the data through a script that never even logged in to their servers, but only very rapidly “viewed” (and recorded) the profiles. Finally, and paradoxically enough, Facebook has blocked us from accessing our Facebook profiles, but all the data we posted over the last years is still there. This proves once more that they care much more about the data you post than about your online identity.

We’re going to reclaim the access to our Facebook accounts, and the right to express and document our work on our own websites. And even if we are forced to go offline, Lovely-Faces.com will never go offline in the minds of involved people.

Face to Facebook data:

People who asked to be removed from the database: 56
People who asked to be included in the database: 14

Commercial dating website partnership proposals: 4
Other partnership proposals: 9

Cease and desist letters by Perkins Coie LLP (Facebook lawyers): 1
Other threatened lawsuits or class actions: 11

Anonymous email death threats: 5

TV reports: 3
Online news about Lovely-Faces.com (source: Google News): 427

Number of times the “lovely faces” introductory video has been viewed on YouTube: 31,089
Unique users on Lovely-Faces.com: 211,714

Face to Facebook links (a few):

Fox news LA (video)
https://www.myfoxla.com/dpp/lifestyle/facebook-profiles-scraped-for-fake-dating-site-20110207

WSBTV 2 (video)
https://www.wsbtv.com/news/26781527/detail.html

Tagesschau (video, in German)

Wired.com
https://www.wired.com/epicenter/2011/02/facebook-dating/

The Age
https://www.theage.com.au/technology/technology-news/facebook-photos-swiped-for-dating-website-20110206-1ailu.html

Stanford Law School / The Center for Internet and Society
https://cyberlaw.stanford.edu/node/6613

Face to Facebook
https://www.face-to-facebook.net/contact.php
— Alessandro Ludovico – Neural – (https://neural.it/)

Posted in Diary | Tagged , | Comments Off on Face To Facebook Press Release

A much better Python script to rename all tracknames in a gpx file with the first trackpoint date

Of course, just as I predicted, Peter beat me to it, and seemed to learn as much Python in a few days as it’s taken me a few years to learn (sigh). Last night, as I was making progress, he posted me a script he’d already written that does exactly the same job as mine.

The job is to name all the tracks in a GPX file with the time of the first trackpoint. See the earlier post for reasons why.

But I finally got my head around parsing an XML (GPX) file with one of Python’s XML modules. The one in question is the etree subpackage of the lxml package (search the Ubuntu repositories for python-lxml).

#!/usr/bin/env python
#-*- coding:utf-8 -*-
#
# a script to change the track name of each track in a gpx file to the
# date time of the first trackpoint.
# 
# This script is an update of the very stupid renameTracks.py which
# did the same thing but with string functions
#
# TODO: make it take two arguments, one input file, one output
#
# Copyright 2011 Daniel Belasco Rogers
# 
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see
# <https://www.gnu.org/licenses/>.

from lxml import etree
import sys, os.path
from optparse import OptionParser

SUFFIX = '_tracknames'

def main():
    usage = "usage: %prog /path/to/gpx/file.gpx"
    parser = OptionParser(usage, version="%prog 0.1")
    (options, args) = parser.parse_args()
    if len(args) != 1:
        parser.error("\nplease define input GPX file")
    filename = args[0]
    
    if not(os.path.isfile(filename)):
        print "input file does not exist"
        exit(1)
    
    base, ext = os.path.splitext(filename)
    newfilename = '%s%s%s' % (base, SUFFIX, ext)
    print newfilename
    tree = etree.parse(filename)
    root = tree.getroot()
    # get namespace
    xmlns = root.nsmap[None]
    #print '\nxmlns = %s' % xmlns

    # the following variables are to simplify the searching under
    # root.iter() and item.find() below - there must be a better way than
    # this to not refer to the namespace all the time

    trk =       '{%s}trk' % xmlns
    name =      '{%s}name' % xmlns
    trkpttime = '{%s}trkseg/{%s}trkpt/{%s}time' % (xmlns, xmlns, xmlns)

    trknum=0 # number of trk tags in file
    for element in root.iter(trk):
        trknum += 1 
        print '-'*48
        print 'track %d' % trknum
        old_name = element.find(name)
        print 'old_name: %s' % old_name.text
        new_name = element.find(trkpttime)
        print 'new_name: %s' % new_name.text
        old_name.text = new_name.text
    print '-'*48

    print '\nwriting file %s\n' % newfilename

    writefile = open(newfilename, 'w')
    writefile.write(etree.tostring(tree, encoding="utf-8", xml_declaration=True))
    writefile.close()

    print 'done - script ends\n'

if __name__ == '__main__':
    sys.exit(main())

It took me a while to get my head around how to search forward in the document once I’d found a trk tag, dig down further to find the first trackpoint time (which sits under trk > trkseg > trkpt) and use its text as the trk > name text. I did it in the end with element.find('NAMESPACE/tag')
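The namespace-prefixed search paths are the fiddly bit, so here is a minimal, self-contained sketch of the same trick with a made-up two-line GPX string. It uses the standard library's xml.etree.ElementTree (Python 3), which accepts the same '{namespace}tag' syntax as lxml's find() and iter():

```python
# Minimal sketch of the '{namespace}tag' search paths used above.
# The GPX sample below is invented for illustration.
import xml.etree.ElementTree as ET

GPX = """<gpx xmlns="http://www.topografix.com/GPX/1/0">
  <trk>
    <name>old name</name>
    <trkseg>
      <trkpt lat="52.5" lon="13.4"><time>2011-02-02T10:00:00Z</time></trkpt>
    </trkseg>
  </trk>
</gpx>"""

xmlns = 'http://www.topografix.com/GPX/1/0'
root = ET.fromstring(GPX)

for trk in root.iter('{%s}trk' % xmlns):
    # dig down to the first trackpoint's time, then copy it into trk > name
    first_time = trk.find('{%s}trkseg/{%s}trkpt/{%s}time' % (xmlns, xmlns, xmlns))
    name = trk.find('{%s}name' % xmlns)
    name.text = first_time.text

print(root.find('{%s}trk/{%s}name' % (xmlns, xmlns)).text)
# → 2011-02-02T10:00:00Z
```

With lxml you can read the namespace off the document itself via root.nsmap[None], as the script above does, instead of hard-coding it.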

Posted in Code, Diary, Python | Tagged , , | Comments Off on A much better Python script to rename all tracknames in a gpx file with the first trackpoint date

Using the BT-747 Programme with Dataloggers

For those of us who struggle with the BT-747 programme, I thought I’d note down some methods here for getting GPX files out of the raw logs that you download from your datalogger.

First of all, if you’re on Linux, remember that you can start the programme the nautilus way or the terminal way:
Nautilus:
Navigate to the directory (folder) you downloaded BT-747 to and look for the file ‘run_j2se.sh’. Double-click on it and a dialogue box should come up: “Do you want to run ‘run_j2se.sh’, or display its contents?”. Click ‘Run’, wait a few seconds (it is Java, after all), and the GUI should pop up.

Terminal (or emacs shell):
cd to the directory you downloaded BT-747 to and run
$ ./run_j2se.sh

The following should pop up:

screenshot of bt-747 programme highlighting raw log file field

Downloading should be pretty easy after ‘connect’ing to the device at the bottom, but then you’re left with a *.bin file named whatever you specified in the ‘Raw Log File’ field (see arrow on the image above). The idea is that you keep appending to this file so that it grows and grows, but that’s not how I use it: if you want to keep each download separate, I suggest you rename this Raw Log File each time you download another chunk of data.

So, what to do with this pesky *.bin file? Well, I like to save things as GPX, which, although verbose, is one of the most transferable file formats there is.

Once you’ve made sure you’re pointing to the correct file in ‘Raw Log File’ (the *.bin file you want to convert), let’s consider the ‘Output File Prefix’, by which the programmer of BT-747 means what to call the file before the .gpx or .kml or .csv suffix of whatever format you choose. Make sure you’re not overwriting anything here, and make up a naming convention that works. Bear in mind that it seems to append the current date between this ‘prefix’ and the ‘suffix’, i.e. ‘GPSDump-20100129.gpx’, so watch out for that and don’t double up the date…
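To make the naming scheme concrete, here is a small sketch of the pattern as I understand it (prefix, then the current date, then the format suffix). The expected_output_name helper is my own illustrative function, not part of BT-747:

```python
# Sketch of the file-naming pattern described above: BT-747 appears to
# insert the current date between the 'Output File Prefix' and the
# format suffix, so 'GPSDump' becomes e.g. 'GPSDump-20100129.gpx'.
# expected_output_name is a hypothetical helper for illustration only.
from datetime import date

def expected_output_name(prefix, ext, day=None):
    """Return the name BT-747 would (apparently) produce."""
    day = day or date.today()
    return '%s-%s.%s' % (prefix, day.strftime('%Y%m%d'), ext)

print(expected_output_name('GPSDump', 'gpx', date(2010, 1, 29)))
# → GPSDump-20100129.gpx
```

The point is simply: don’t put a date in the prefix yourself, or you’ll end up with it twice.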

screenshot of bt-747 programme highlighting output field

When you’re happy with the output file name, the final step to convert the *.bin in the ‘Raw Log File’ box to a GPX is to click the GPX button under ‘Convert’ (see below).

screenshot of bt-747 programme highlighting GPX convert button

That should be it: the file will be in the same directory as the *.bin file, unless you put some other path in the ‘Output File Prefix’ field, like “/gpx/blah-20100129.gpx”, in which case you should know what you’re doing and won’t need this guide.

Happy logging

Posted in Diary | Comments Off on Using the BT-747 Programme with Dataloggers

A Python script to get a list of placenames from latitude and longitude

Here’s a handy adaptation of Nicolas Laurance’s geoname.py script, which takes a lat/long pair and returns a list of place names and their codes, derived from the geonames database.

#!/usr/bin/env python
#-*- coding:utf-8 -*-

# based on geoname.py by Nicolas Laurance (nlaurance@zindep.com)
# This extends his code to perform an extendedFindNearby lookup of a
# lat lon value and produces a list of names from the returned file
#
# NB enter the lat lon values in that order with no comma between If
# you get an AttributeError: Bag instance has no attribute 'geoname',
# it probably just means the server is busy - reload the command and
# do it again
#
# Usage e.g $ python geonameLookup.py 55.751320 11.331710
#
# Daniel Belasco Rogers
# Date: 2010-07-17


from optparse import OptionParser
from time import sleep
from xml.dom import minidom
import sys, urllib, re

HTTP_PROXY = None
DEBUG = 0

def getProxy(http_proxy = None):
    """get HTTP proxy"""
    return http_proxy or HTTP_PROXY

def getProxies(http_proxy = None):
    http_proxy = getProxy(http_proxy)
    if http_proxy:
        proxies = {"http": http_proxy}
    else:
        proxies = None
    return proxies

class Bag: pass

_intFields = ('totalResultsCount',)
_dateFields = ()
_listFields = ('code','geoname','country',)
_floatFields = ('lat','lng','distance')

def unmarshal(element):
    #import pdb;pdb.set_trace()
    rc = Bag()
    childElements = [e for e in element.childNodes if isinstance(e, minidom.Element)]
    if childElements:
        for child in childElements:
            key = child.tagName
            if hasattr(rc, key):
                if key in _listFields:
                    setattr(rc, key, getattr(rc, key) + [unmarshal(child)])
            elif isinstance(child, minidom.Element) and (child.tagName in ( )):
                rc = unmarshal(child)
            elif key in _listFields:
                setattr(rc, key, [unmarshal(child)])
            else:
                setattr(rc, key, unmarshal(child))
    else:
        rc = "".join([e.data for e in element.childNodes if isinstance(e, minidom.Text)])
        if str(element.tagName) in _intFields:
            rc = int(rc)
            if DEBUG: print '%s : %s' % (element.tagName,rc)
        elif str(element.tagName) in _floatFields:
            rc = float(rc)
            if DEBUG: print '%s : %s' % (element.tagName,rc)
        elif str(element.tagName) in _dateFields:
            year, month, day, hour, minute, second = re.search(r'(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})', rc).groups()
            rc = (int(year), int(month), int(day), int(hour), int(minute), int(second), 0, 0, 0)
            if DEBUG: print '%s : %s' % (element.tagName,rc)
    return rc

def _do(url, http_proxy):
    proxies = getProxies(http_proxy)
    u = urllib.FancyURLopener(proxies)
    usock = u.open(url)
    rawdata = usock.read()
    if DEBUG: print rawdata
    xmldoc = minidom.parseString(rawdata)
    usock.close()
    data = unmarshal(xmldoc)
    return data

def _buildextendedFindNearby(lat,lng):
    searchUrl = "https://ws.geonames.org/extendedFindNearby?lat=%(lat)s&lng=%(lng)s" % vars()
    return searchUrl

def extendedFindNearby(lat,lng, http_proxy=None):
    """
   
    """
    url = _buildextendedFindNearby(lat,lng)
    if DEBUG: print url
    return _do(url,http_proxy).geonames

def main():
    usage = "usage: %prog 'latitude' 'longitude'"
    parser = OptionParser(usage, version="%prog 0.1")
    (options, args) = parser.parse_args()
    if len(args) != 2:
        parser.error("\nplease enter the latitude and longitude of the point you want to look up")
    lat, lng = args
    print 'latitude:%s, longitude:%s' % (lat, lng)
    place = extendedFindNearby(lat, lng)
    for b in place.geoname:
        print '%s\t%s' % (b.name, b.fcode)

if __name__ == '__main__':
    sys.exit(main())
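
The unmarshal routine above turns the geonames XML response into nested attribute bags, from which main() prints each name and fcode. Here is an offline sketch of that final step (Python 3, stdlib minidom, no network); the sample XML is an invented, abbreviated stand-in for a real extendedFindNearby response:

```python
# Offline sketch of what the script does with the geonames response:
# parse the XML and pull out each geoname's name and fcode.
# SAMPLE is a made-up, abbreviated response for illustration.
from xml.dom import minidom

SAMPLE = """<geonames>
  <geoname><name>Denmark</name><fcode>PCLI</fcode></geoname>
  <geoname><name>Region Sjaelland</name><fcode>ADM1</fcode></geoname>
</geonames>"""

doc = minidom.parseString(SAMPLE)
for g in doc.getElementsByTagName('geoname'):
    name = g.getElementsByTagName('name')[0].firstChild.data
    fcode = g.getElementsByTagName('fcode')[0].firstChild.data
    print('%s\t%s' % (name, fcode))
```

The real script does the same walk generically via unmarshal, so any fields geonames returns become attributes without hand-written parsing.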
Posted in Code, GPS, Linux, Python, Software | Tagged , , | 1 Comment

Please sign this if you care about cultural production in Berlin

This excellent site pretty much explains itself, in English and German: it’s about the laughable attempt by the mayor of Berlin and his cronies to get as much as they can out of cultural activity in this city while at the same time making absolutely no effort to support the cultural producers.

https://www.bbk-berlin.de/con/bbk/front_content.php?idart=826

Posted in Diary | Tagged , , , | Comments Off on Please sign this if you care about cultural production in Berlin