The age component of the system is described in Nguyen et al. (2013). The authors apply logistic and linear regression on counts of token unigrams occurring at least 10 times in their corpus.
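A setup along these lines can be sketched as follows. This is an illustrative reconstruction using scikit-learn, not the authors' actual system; the toy corpus and labels are invented, and scikit-learn's min_df (document frequency) is used to stand in for the paper's threshold of unigrams occurring at least 10 times.

```python
# Illustrative sketch (NOT the authors' code): logistic regression on
# token-unigram counts, keeping only sufficiently frequent unigrams.
# The toy corpus and labels below are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# One concatenated tweet stream per author (toy data, heavily repeated
# so that the frequency threshold can be met at all).
texts = (["ik ga vandaag voetbal kijken met de mannen"] * 20 +
         ["ik ga vandaag shoppen met mijn vriendinnen"] * 20)
genders = [0] * 20 + [1] * 20  # 0 = male, 1 = female (placeholder labels)

# min_df=10 stands in for the threshold of unigrams occurring at
# least 10 times in the corpus.
vectorizer = CountVectorizer(min_df=10)
X = vectorizer.fit_transform(texts)

clf = LogisticRegression()
clf.fit(X, genders)
print(clf.score(X, genders))  # training accuracy on the toy data
```

On real data one would of course evaluate on held-out authors rather than on the training set; the point here is only the unigram-counting and regression pipeline.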
The paper does not describe the gender component, but the first author has informed us that the accuracy of the gender recognition on the basis of 200 tweets is about 87% (Nguyen, personal communication). Nguyen et al. (2014) did a crowdsourcing experiment, in which they asked human participants to guess the gender and age on the basis of 20 to 40 tweets. Despite this, we will still take the biological gender as the gold standard in this paper, as our eventual goal is creating metadata for the TwiNL collection.

Experimental Data and Evaluation

In this section, we first describe the corpus that we used in our experiments (Section 3.1).
Two other machine learning systems, Linguistic Profiling and TiMBL, come close to this result, at least when the input is first preprocessed with PCA.

Introduction

In the Netherlands, we have a rather unique resource in the form of the TwiNL data set: a daily updated collection that probably contains at least 30% of the Dutch public tweet production since 2011 (Tjong Kim Sang and van den Bosch 2013).
However, like any automatically harvested collection, its usability is reduced by a lack of reliable metadata.
The resource would become even more useful if we could deduce complete and correct metadata from the various available information sources, such as the provided metadata, user relations, profile photos, and the text of the tweets.
In this paper, we start modestly, by attempting to derive just the gender of the authors automatically, purely on the basis of the content of their tweets, using author profiling techniques.
For tweets in Dutch, we first look at the official user interface for the TwiNL data set. Among other things, it shows gender and age statistics for the users producing the tweets found for user-specified searches.
These statistics are derived from the users' profile information by way of some heuristics.
The authors do not report the set of slang words, but the non-dictionary words appear to be more related to style than to content, showing that purely linguistic behaviour can contribute information for gender recognition as well.
For our experiment, we selected 600 authors for whom we were able to determine with a high degree of certainty a) that they were human individuals and b) what gender they were.
We then experimented with several author profiling techniques, namely Support Vector Regression (as provided by LIBSVM; Chang and Lin 2011), Linguistic Profiling (LP; van Halteren 2004), and TiMBL (Daelemans et al. 2004).
For all techniques and features, we ran the same 5-fold cross-validation experiments in order to determine how well they could be used to distinguish between male and female authors of tweets.
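The cross-validation setup can be sketched as follows. This is an illustrative sketch rather than the paper's experiment: scikit-learn's SVR (which wraps LIBSVM) is used, and the feature vectors and the artificial gender signal are randomly generated placeholders.

```python
# Illustrative sketch (not the paper's experiment): 5-fold cross-validation
# with Support Vector Regression (scikit-learn's SVR wraps LIBSVM).
# Feature vectors and the gender signal below are randomly generated.
import numpy as np
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.svm import SVR

rng = np.random.default_rng(42)
n_authors = 600                             # as in the selected author set
X = rng.normal(size=(n_authors, 50))        # placeholder feature vectors
y = np.repeat([-1.0, 1.0], n_authors // 2)  # -1 = male, +1 = female
X[y == 1.0] += 0.5                          # inject an artificial gender signal

# Regression scores from 5-fold CV; thresholding at 0 yields a class label.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_predict(SVR(kernel="linear"), X, y, cv=cv)
accuracy = np.mean(np.sign(scores) == y)
print(f"5-fold CV accuracy: {accuracy:.2f}")
```

Using regression rather than classification yields a graded score per author, which can then be thresholded; the same folds can be reused across techniques so that the systems are compared on identical splits.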
In the following sections, we first present some previous work on gender recognition (Section 2). The field is currently receiving a further impulse now that vast data sets of user-generated content are becoming available. Narayanan et al. (2012) show that authorship recognition is also possible (to some degree) even when the number of candidate authors is as high as 100,000 (as compared to the usually fewer than ten in traditional studies).