Old French fabliaux in modern English translations

Fabliaux are short, comic tales in verse written in Old French from roughly 1150 to 1350.  Modern English translations of fabliaux are scattered across a variety of books.  The online Archives de littérature du Moyen Âge (ARLIMA) provides much bibliographic information about fabliaux, but that information is not structured as a dataset.  To aid further study, I’ve created a fabliaux bibliographic dataset covering modern English translations of fabliaux.

You can use the fabliaux bibliographic dataset as an index to modern English translations.  Given an Old French title for a fabliau, the fabliaux translations index indicates page numbers of modern English translations of that fabliau in published collections of fabliaux in English translation.  For example, Dubin’s recent book, The Fabliaux, contains English translations of sixty-nine fabliaux.  But it doesn’t have a fabliaux index.  If you want to find Dubin’s translation of La Dame Escoilliee, you have to scan through his book’s table of contents.  With the fabliaux bibliographic dataset, you can easily look up the title in an alphabetical title index by the first key word (“dame” for this title) and see the page number in Dubin’s collection.[1]  Moreover, the same line also indicates page numbers for other modern English translations of that fabliau in other published collections.

You can also use the fabliaux bibliographic dataset critically.  It shows collections of fabliaux translations in relation to each other.  It also shows those collections in relation to the overall corpus of fabliaux.  Fabliaux chosen for translation and their ordering in a collection of translated fabliaux provide insight into translators’ perspectives on fabliaux and interest in reading fabliaux.

Across the seven major collections of fabliaux in English translation, four fabliaux were translated five times each.  These most popular fabliaux in translation are:

  • Du bouchier d’Abevile
  • De Brunain la vache au prestre
  • La borgoise d’Orliens
  • Berengier au lonc cul [2]

All four fabliaux depict men being robbed, beaten, and/or humiliated.  Berengier au lonc cul tells of a man being coerced into kissing the ass of a woman.  The fabliau La Gageure seems to respond to such depictions of ass-kissing.  La Gageure is not presented in any of the seven major collections of fabliaux in English translation.

The first fabliau in Dubin’s collection is Du con qui fu fait a la besche.  Only one other fabliaux collection includes an English translation of that tale.  That other collection is a scholarly work clearly intended for academic readers.  Dubin’s work, in contrast, has much more popular packaging.  Placing Du con qui fu fait a la besche first in Dubin’s collection suggests that this fabliau has become much more attractive.[3]

The leading appearance of Du con qui fu fait a la besche in Dubin’s collection may indicate the growing importance of the literature of men’s sexed protests.  Du con qui fu fait a la besche is a rather crude reworking of the Genesis account of the creation of woman.  In contrast to many other fabliaux, this fabliau presents a false image of domestic violence (that false image has become a stereotype today).  More importantly, this fabliau describes women’s propensity to talk.  That’s a common observation in the literature of men’s sexed protests.  Propensity to talk seems to be associated with women’s social superiority.  In addition, Du con qui fu fait a la besche concludes with these observations:

they’ve {women have} destroyed many good men,
who’ve come to grief and been disgraced
and lost what wealth they once possessed. [4]

These concluding observations echo down through the ages to the persecution of Charlie Chaplin and the imprisonment of men impoverished and unable to pay child-support debts.  Medieval scholars, with their new embrace of transgressive works, seem to be crossing boundaries, interrogating and unsettling discourse, and problematizing naturalized social injustices.

Update: I’ve added additional sources to the fabliaux bibliographic dataset.  They change slightly the statistics cited above.

*  *  *  *  *

fabliaux bibliographic dataset (also available as an Excel workbook)

Notes:

[1] The Old French fabliaux title list in the fabliaux bibliographic dataset is from the ARLIMA list, augmented with the Old French titles of any additional tales included in collections of fabliaux in English translation.  ARLIMA’s individual entries for fabliaux show in some instances multiple Old French titles for a given fabliau.  In most cases, variant titles have common key words.  If you can’t find the title of an Old French fabliau in the fabliaux bibliographic dataset, consider variant titles.  Consider also that Nouveau recueil complet des fabliaux may not categorize that particular tale as a fabliau.  Various authorities count from 130 to 160 tales as fabliaux.  The fabliaux corpus in the fabliaux bibliographic dataset currently consists of 136 fabliaux.

[2] See translation frequencies in the fabliaux bibliographic dataset.  I define a major collection as a work that contains five or more fabliaux in modern English translation.  For bibliographic citations for these collections, see source identifiers.

[3] Dubin’s collection is based on the Old French texts in Willem Noomen’s Nouveau recueil complet des fabliaux.  The other collection that includes Du con qui fu fait a la besche is Raymond Eichmann and John DuVal’s The French fabliau: B.N. Ms. 837.  Its title indicates well its academic orientation.  The French fabliau: B.N. Ms. 837 follows the fabliaux ordering adopted in Nouveau recueil complet des fabliaux.  Dubin’s collection has a new ordering.  That ordering may reflect an approach like the one Dubin describes for his textual choices:

Noomen seeks to produce as close a text as possible to that of the original author; I looked for the version that pleased me most.

Dubin (2013) “Translator’s Note,” p. xxx.  Dubin’s collection oddly doesn’t include Le lai d’Aristote.  That tale is included in four of the seven major collections of fabliaux in English translation.

[4] Dubin (2013) p. 9.  Dubin comments:

there are limits to {the fabliaux’s} daring.  When we consider what is missing, for example, even the sexually explicit fabliaux strike us as almost prudish by today’s standards.  The man is invariably on top but for one comic exception, oral sex is entirely absent, and the rare instance of same-sex relations are all misunderstandings.  Their moral stance is thus at once conservative and rebellious.

These comments are somewhat misleading.  Anne Ladd’s pioneering and scarcely known statistical analysis of fabliaux indicates that fabliaux depict women winning in over 50% of male-female conflicts.  Cited in Johnson (1983) p. 298.  The absence of oral sex in the fabliaux isn’t a major weakness.  Other medieval literature presents men’s protests against oral sex that doesn’t fully meet many men’s needs.  With enlightened reading, fabliaux present major challenges to current academic conservative morality.

References:

Dubin, Nathaniel. 2013. The fabliaux. New York: Liveright.

Johnson, Lesley.  1983. “Women on Top: Antifeminism in the Fabliaux?” The Modern Language Review. 78 (2): 298-307.

big data and controlled experiments

Big data is currently a top geek fashion.  But more data does not directly imply better understanding.  Avinash Kaushik offers great wisdom on the mistakes of data puking.  Collecting data and making tables and diagrams is no substitute for figuring out what to do.

Another top geek fashion is A/B testing.  Kohavi et al. (2012) summarize the pitfalls with a proverb: “the difference between theory and practice is greater in practice than in theory.”  The allure of controlled experiments is that you can directly test outcomes.  In practice you need to address two big problems:

  1. Actually understanding the test.  Kohavi et al. (2012) point to the importance of A/A testing; a minimal simulation sketch appears after this list.  If you can’t understand and control the outcomes of A/A testing, don’t waste your time doing A/B testing.  A test implicitly encompasses judgments about sample relevance and time-horizon relevance.  A test also necessarily implies some anticipated change in behavior.  Have you considered possibilities for changes in behavior apart from and unrelated to your proposed test change?
  2. Actually identifying a feasible and meaningful outcome measure.   Optimizing what is measurable may not increase economic value.  Figuring out the best feasible measure of the economic value relevant to a specific enterprise often isn’t easy.  Kohavi et al. (2012) provide some insightful examples of trade-offs between short-run value and long-run value in the context of testing search advertising.
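
The A/A idea can be made concrete with a minimal simulation sketch in Python (SciPy is assumed to be available; the normally distributed metric, sample sizes, and 5% threshold are arbitrary illustrations).  A healthy testing harness flags roughly the nominal share of A/A runs as “significant”; if yours flags many more, the harness, not the treatment, is the problem.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    runs, flagged = 1000, 0

    for _ in range(runs):
        # Both groups get the same "treatment": identical distributions.
        a = rng.normal(loc=0.0, scale=1.0, size=5000)
        b = rng.normal(loc=0.0, scale=1.0, size=5000)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            flagged += 1

    # With a 5% threshold, expect roughly 5% of A/A runs to be flagged.
    print(f"A/A false-positive rate: {flagged / runs:.3f}")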

In short, big data and controlled experiments are tools for thinking, not substitutes for thinking.

*  *  *  *  *

Reference:

Kohavi, Ron, Alex Deng, Brian Frasca, Roger Longbotham, Toby Walker, and Ya Xu. 2012. “Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained.” To appear in KDD 2012, August 12-16, 2012, Beijing, China.

building a data search engine

Suppose you want to find data relating to a specific topic.  Software can easily distinguish between alphabetical text and numbers.  Software can also easily recognize tabular formats in HTML and plain text.  Moreover, the extent to which documents contain numbers and tables strongly distinguishes among documents.  “Look for data” seems like a significant search qualifier that wouldn’t be costly to implement.  So where are the data search tools?
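
To make concrete how cheap such a qualifier could be, here’s a rough Python sketch; the token pattern and the weight given to table markup are arbitrary assumptions, not a tested relevance signal.

    import re

    def data_score(text: str) -> float:
        # Fraction of whitespace-delimited tokens that look like numbers,
        # plus a crude bonus for HTML table markup.
        tokens = re.findall(r"\S+", text)
        if not tokens:
            return 0.0
        numeric = sum(bool(re.fullmatch(r"[-+$]?\d[\d,.%]*", t)) for t in tokens)
        table_bonus = 0.1 * text.lower().count("<table")
        return numeric / len(tokens) + table_bonus

    # Documents scoring above some threshold would qualify for a "look for data" search.
    print(data_score("Revenue was $1.2 billion, up 3% year over year."))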

Zanran is a search engine designed to find data.  The only way to tell Google that you want to search for data is to specify a search for .xls (Microsoft Excel) documents.  That’s a poor specification for a data search, especially for a company that’s not Microsoft.  With Zanran, you specify that you want to search for data simply by using Zanran for the search.  Zanran returns documents containing data.  Unlike Datafiniti, Zanran doesn’t attempt to extract the data into a tabular form.  Unlike WolframAlpha, Zanran doesn’t attempt to do calculations with data that it finds.  Zanran finds the relevant documents.  You extract the data and make the specific tables or calculations that you want.  At least until the forthcoming world-wide implementation of linked data, Zanran’s approach is the most cost-effective division of tasks for using data from the whole web of various document types.

A stand-alone data search engine unfortunately doesn’t seem economically propitious.  Building a data search engine involves all of the challenges of building a general search engine.[*]  Search for data is mainly a tuning of the relevance-ranking algorithm.  Even without such a tuning, a Google search for historical advertising expenditure leads to better data than a similar Zanran search.  Moreover, searches for data don’t provide context for lucrative advertising.  Persons searching for data aren’t looking to buy mass-market consumer products.  Perhaps Zanran or similar services can succeed on a subscription or pay-per-search basis.   For the sake of fact-based policy and business analysis, let’s hope so.

*  *  *  *  *

[*] Zanran analyzes images to determine if they contain a graph, chart, or table.  Thus a Zanran search can potentially return a desired graph or data that a purely textual search would miss.  That’s valuable, but searching images for numerical data and data presentations seems to me to be a rather small share of the overall value of data search to users.

describing statistics

Verbally describing statistics accurately and succinctly is often difficult.  Needle and Thread offers a better way.  In a recent paper on Thread, Glenn McDonald, the Thread designer, stated:

The Thread query describes a path through the data … we believe that it is more expressive, easier to map to the shapes of human inquiry, and far more compact and data-focused. These may be enough to give the query language a qualitatively different role: a tool not just for people to talk to machines about data, but for people to talk to themselves and each other about data in such a way that machines can participate in, and inform, the discussions.

With Needle and Thread, you can understand statistics by seeing how they’re calculated and experimenting with alternatives.

badly structured tables have a bright future

Which is better: one big table, or two or more smaller tables?  The organization of the data sources, the number of smaller tables, the extent of the relationships between the smaller tables, and economies in table processing all affect the balance of advantage.  But cheaper storage, cheaper computing power, and fancier data tools probably favor the unified table.  At the limit of costless storage, costless processing, and tools that make huge masses of data transparent, you can handle a component of the data as easily as you can handle all the data.  Hence in those circumstances, using one big table is the dominant strategy.[*]

Unified tables are likely to be badly structured from a traditional data modeling perspective.  With n disjoint components, the unified table has the form of a diagonal matrix of tables, where the diagonal elements are the disjoint components and the off-diagonal elements are empty matrices.  It’s a huge waste of space.  But for the magnitudes of data that humans generate and curate by hand, storage costs are so small as to be irrelevant.   Organization, in contrast, is always a burden to action.  The simpler the organization, the greater the possibilities for decentralized, easily initiated action.
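
A tiny sketch of the point in Python with pandas (the component tables and field names are hypothetical): concatenating disjoint record lists into one table simply leaves the off-diagonal cells empty.

    import pandas as pd

    # Two disjoint components that share no fields.
    calls = pd.DataFrame({"company": ["A Co"], "monthly_calls": [100]})
    plans = pd.DataFrame({"plan": ["basic"], "price": [25]})

    # The unified table: each record fills only its own component's fields;
    # every other cell stays empty (NaN), a "diagonal matrix of tables".
    unified = pd.concat([calls, plans], ignore_index=True)
    print(unified)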

Consider collecting data from company reports to investors.  Such data appear within text of reports, in tables embedded within text, and (sometimes) in spreadsheet files posted with presentations.  Here are some textual data from AT&T’s 3Q 2010 report:

More than 8 million postpaid integrated devices were activated in the third quarter, the most quarterly activations ever. More than 80 percent of postpaid sales were integrated devices.

These data don’t have a nice, regular, tabular form.  If you combine that data with data from the accompanying spreadsheets, the resulting table isn’t pretty.  It gets even more badly structured when you add human-generated data from additional companies.

Humans typically generate idiosyncratic data presentations.  More powerful data tools allow persons to create a greater number and variety of idiosyncratic data presentations from well-structured, well-defined datasets.   One might hope that norms of credibility evolve to encourage data presenters to release the underlying, machine-queryable dataset along with the idiosyncratic human-generated presentation.  But you can think of many reasons why that often won’t happen.

Broadly collecting and organizing human-generated data tends to produce badly structured tables.  No two persons generate exactly the same categories and items of data.  The data that persons present also change over time.  The result is a wide variety of small data items and tables.  Combining that data into one badly structured table makes for more efficient querying and analysis.  As painful as this situation might be for thoughtful data modelers, badly structured tables have a bright future.

*  *  *  *  *

[*] Of course the real world is finite.  A method with marginal cost that increases linearly with job size pushes against a finite world much sooner than a method with constant marginal cost.   The above thought experiment is meant to offer insight, not a proof of a real-world universal law.

exploring and remodeling table fields

Sometimes tables are messy not just in their data items, but also in the fields that define the table columns.[1]  Various techniques help to deal with such “second order” messiness.  Sorting table fields alphabetically, or evaluating them with more powerful text similarity measures, helps to identify inadvertently duplicated fields.  Sorting table fields by the number of non-null items in the fields would tell you the relative data size of different fields.  Of course, you would also like to be able to merge and split fields just as easily as you select fields to include in a sub-table of interest.
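
Here’s a sketch of those two techniques in Python with pandas; the 0.8 name-similarity cutoff and the sample fields are arbitrary assumptions.

    import difflib
    import pandas as pd

    def field_report(table: pd.DataFrame, cutoff: float = 0.8) -> None:
        # Relative data size of each field: count of non-null items.
        print(table.count().sort_values(ascending=False))
        # Flag alphabetically adjacent field names that look suspiciously alike.
        fields = sorted(table.columns, key=str.lower)
        for a, b in zip(fields, fields[1:]):
            if difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio() >= cutoff:
                print(f"possible duplicate fields: {a!r} / {b!r}")

    field_report(pd.DataFrame({"Company": ["A Co"], "company ": [None], "revenue": [1.5]}))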

Transforming a table into row number/field/item triples can be helpful for exploring and remodeling table fields.  With such a transformation, fields become new data items.  Excluding triples with null items and faceting the fields shows the count of items in each field.  Merging and splitting fields become operations of selecting relevant triples and renaming the field/data item.  When you’ve finished such meta-cleaning, you might want to transform the triples back into a remodeled table.  Since field merges can create a field containing more than one item, the remodeled table ideally would support multiple items per cell. [2]
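
In pandas terms (a sketch with hypothetical fields and values), the triple transform is a melt keyed on a row-number column, and the reverse transform is a pivot.

    import pandas as pd

    table = pd.DataFrame({"company": ["A Co", "B Co"],
                          "customers": [10, None],
                          "revenue": [1.5, 2.0]})
    table.insert(0, "row", list(range(1, len(table) + 1)))

    # Row number / field / item triples, excluding null items.
    triples = (table.melt(id_vars="row", var_name="field", value_name="item")
                    .dropna(subset=["item"]))
    print(triples["field"].value_counts())      # facet the fields: items per field

    # Reverse transform: back to a (remodeled) table.
    remodeled = triples.pivot(index="row", columns="field", values="item")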

The triple transform is an instance of more general table reshaping operations.  A long transformation replaces a set of fields with a key field containing those field names and a new field containing the keyed values.  A wide transformation reverses the long transformation.  It expands a key field into separate key-instance fields populated with the corresponding keyed values.  Non-varying fields are expanded or contracted as necessary.  Table reshaping operations change both the column and row dimensions of a table.  The reshape command in Stata implements this class of table operations.  The Stata reshape documentation shows examples of these transformations.   SDDL / STT provides table reshaping using spreadsheet-based markup.
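
For comparison, here’s a sketch of the long and wide transformations in pandas with hypothetical fields; Stata users will recognize reshape long and reshape wide.

    import pandas as pd

    wide = pd.DataFrame({"id": [1, 2], "rev2009": [10, 20], "rev2010": [12, 24]})

    # Long: the rev* fields collapse into a key field ("year") plus a value field.
    long = pd.wide_to_long(wide, stubnames="rev", i="id", j="year").reset_index()

    # Wide: expand the key field back into separate key-instance fields.
    back = long.pivot(index="id", columns="year", values="rev")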

The awesome new data tool Google Refine can do table reshaping.  Here’s the Google Refine procedure for transforming a table into row/field/item triples:

  1. Create a row-number data column called “row” at the beginning of your table, if you don’t already have such a column.  That column will be identical to the (non-editable) row number column on the Google Refine table presentation frame.
  2. From the drop-down menu of the first column after the row-number column, select “Transpose” / “Cells across columns into rows…”.  Select the field below the row field in the “from column” and the last field in the “to column”.  Call the column name “field”, check “prepend the column name”, and separate the column name and cell value with “:” (or some other value not included in item text).  Then click “Transpose”.  The table is then elongated, but it needs some further modifications.
  3. From the “row” column drop-down menu, select “Edit cells” / “Fill down”.  That fills in the row numbers for all  the records.
  4. In the field column, select “Edit column” / “Split into several columns…”, and enter the separator (“:”) used in step 2.  Clicking OK completes the transformation of the table into triples.

To transform the triples back into the wide table:

  1. In the drop-down menu for the rightmost column (field2), select “Transpose” / “Cells across columns into rows”.  For the number of rows to be transposed, enter the number n of fields in the original table (not including the row number field).
  2. Rename the new “field 2 n” fields based on the field names in the previous column (field 1).
  3. In any of the former “field 2 n” fields, select from the drop-down menu “Facet” / “Text facet”.  In the left panel select “(blank)”.   You should now see all the blank “field 2 n” rows.
  4. In the drop-down menu for “all”, select “Edit rows” / “Remove all matching rows”.
  5. Click on “Reset All” and the table is back to its original form.[3]

This triple transform unfortunately isn’t as helpful as it could be.  You can facet the (former) table fields and understand what fields exist in the table.  But any changes that you make just in the field column won’t have any effect in the reverse transform.  A function that implemented the reverse transform while honoring changes in the field column (and automatically created the corresponding field names) seems straightforward to write.[4]  A robust reverse triple transform would make a great data tool even more useful.[5]
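
Outside Google Refine, such a reverse transform might look like the following pandas sketch; joining multiple items in one cell with a delimiter is the assumed answer to the multi-item problem discussed in note [4].

    import pandas as pd

    def triples_to_table(triples: pd.DataFrame, sep: str = "; ") -> pd.DataFrame:
        # Honors whatever edits were made in the "field" column: merged fields
        # simply share a name, and their items are joined with `sep` per cell.
        return (triples.assign(item=triples["item"].astype(str))
                       .groupby(["row", "field"])["item"]
                       .agg(sep.join)
                       .unstack("field"))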

*  *  *  *  *

Notes:

[1] Throughout this post, table means a data collection with one dimension of categorization (a record list), e.g. a typical relational database table that organizes fields  into records.

[2] If it doesn’t, the table cleaner has to ensure before a field merge that no multi-item cells will be created.

[3] Similar steps can implement table reshaping on just a subset of columns.

[4] The possibility of having multiple items in one row-field cell would have to be addressed.  One solution would be to ask for a delimiter before the transform, and use that delimiter to form multi-item cells.

[5] Collecting human-generated unstructured and structured data using SDDL / STT tends to result in messy, wide, sparse tables.

describing and organizing spreadsheet data

Even in this age of big data, most persons collect data in spreadsheets.  Two challenges are common with spreadsheet data, particularly spreadsheet data collected from a variety of sources.  First, you need to understand what numbers you have.  That means both the definition of a specific number and the presence or absence of particular numbers.  Second, you need to combine unstructured data and data tables of various forms into an encompassing data structure that you can flexibly query and re-organize.  Here are some ideas and data tools to address these challenges.

Suppose you are collecting unstructured data from a variety of sources. For example, a Wall Street Journal news article states:

Nielsen, at the request of The Wall Street Journal, analyzed cellphone bills of 60,000 mobile subscribers and found adults made and received an average of 188 mobile phone calls a month in the 2010 period, down 25% from the same period three years earlier.

To collect that data, you might paste the string “adults made and received an average of 188 mobile phone calls a month in the 2010 period” in a spreadsheet cell.  But that’s data you can’t use in spreadsheet calculations.  Ok, stick the number 188 in the cell to the right of that data string so you can do calculations on the relevant number.  Even better, stick the number in the first cell and the description string in the second cell, so that the long, left-justified description string lines up close to the corresponding number.  If you’re really serious, stick the source url in another column to the right so that you can check if you’ve missed something.  This sort of procedure gives you a spreadsheet that looks something like this.   This data format may be good enough for some private-sector work, or even some government work.  If so, fine.

But collecting numbers and ad hoc data description strings has some weaknesses.  In the example above, you haven’t fully captured the data description.  That data concerns “mobile subscribers.”   Mobile subscribers may or may not be different from mobile prepaid service users.  Moreover, suppose you had a hundred numbers like that above.  To find a particular statistic of interest, you would have to start reading through the description strings until you found one similar to the statistic that you sought.   That’s tedious and time-consuming.

If you describe spreadsheet data in a more structured way, you can understand it better and effectively process larger amounts of it.  A common approach would be to set up a spreadsheet table and stick numbers into it.  For example, suppose you want to collect wireless service data by company.  You place in a row your data categories (fields): company, customer type, number of customers 3Q 2010, etc.  Then you start putting data into the relevant columns below.  If you have a lot of categories and the data you find isn’t generally organized in the order of your categories, populating the table will be a tedious and time-consuming task.  Moreover, suppose that you are collecting the data by company.  You have to repeatedly enter the company name in the company column.  That sort of annoyance can easily be multiplied if you are collecting data with additional categorical organizations such as date and business segments.  If these slowly varying categories are spread across a large table, the pain is even more intense.[1]

Here’s some good news: that supercomputer on your desk can work as a tabulating machine. You just need to describe your data with the Spreadsheet Data Description Language (SDDL) and then tabulate it with STT (Spreadsheet Tabulating Tool, or Simple Tabulating Tool, or SDDL Tabulating Tool).   SDDL and STT provide fine new acronyms for you to use without requiring you to learn much or do much different with the beloved spreadsheet programs that you’ve been using since Visicalc came out.  To use SDDL, you put a category in one cell, and in the next cell to the right, an item for that category.  You can add additional categories and items aligned below, in any order most convenient.  When you stick an item into a category that already contains an item, you’ve implicitly tabulated a record and started a new record with empty categories.  STT is a spreadsheet macro that adds to a table on a separate sheet the categories (if necessary) and items contained in a block of SDDL.  SDDL is so simple that you’ve probably been setting up a lot of spreadsheet data in SDDL, without even knowing that you were using SDDL.
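
A minimal Python sketch of that implicit tabulation rule (an illustration of the rule as just described, not the STT script itself):

    def tabulate(cells):
        # cells: an ordered sequence of (category, item) pairs, as read from SDDL.
        # When an item lands in a category that already holds an item,
        # the current record is emitted and a new, empty record begins.
        records, current = [], {}
        for category, item in cells:
            if category in current:
                records.append(current)
                current = {}
            current[category] = item
        if current:
            records.append(current)
        return records

    print(tabulate([("company", "A Co"), ("customers", 10),
                    ("company", "B Co"), ("customers", 20)]))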

Here are some SDDL / STT examples using Google Docs spreadsheets.[2]  So that you can see the SDDL and the tabulation side-by-side, I’ve copied the tabulations from the tabulated sheet to the SDDL examples sheet.[3]  Examples 1.1-1.2 show placing items into categories, and how sticking an item into an occupied category generates a tabulated record.   This technique is more efficient than hand-tabulating data for a wide, sparse table.

You can easily combine hand tabulating and SDDL / STT.  If you start with a tabulation, running STT on a block of SDDL adds any needed categories and adds all items to their categories.  See Example 2 in the SDDL example spreadsheet.

SDDL uses prefixes for pinning and unpinning items in categories.  If you precede a category name with the character “*”, the associated item is pinned in that category across records.   So if you are working on tabulating a group of records for a company, you can pin the company name in the company category for that record group.  That company name will then populate each record in that record group.  When you insert a new name in the company category, you decide whether to pin that new name.   See Example 3.
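
Extending the sketch above with the “*” pinning prefix (again just an illustration of the described behavior, with unpinning omitted; this is not STT’s actual code):

    def tabulate_pinned(cells):
        records, current, pinned = [], {}, {}
        for category, item in cells:
            pin = category.startswith("*")
            category = category.lstrip("*")
            if category in current:
                records.append(current)
                current = dict(pinned)   # new records start with pinned items
            if pin:
                pinned[category] = item
            current[category] = item
        if current:
            records.append(current)
        return records

    # The pinned company name populates every record in the group.
    print(tabulate_pinned([("*company", "A Co"), ("customers", 10),
                           ("customers", 20), ("customers", 30)]))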

You can create SDDL using all the editing, linking, and calculating capabilities of the spreadsheet.  So, for example, if you have a figure somewhere in a spreadsheet, you can reference that figure and do a calculation using it to produce a figure in an SDDL cell.  Similarly you can copy and paste categories to lessen the work of entering SDDL.

SDDL stack and twoway formats provide an easy way to create groups of similar records.  A stack is a table of items with one dimension of categories (a record list).  A twoway format describes a table with two dimensions of categories.  Both stacks and twoways are commonly called tables, but technically these forms differ significantly.

In SDDL, a stack is one or more rows in a block where more than one item follows the category that starts each row.  The row-starting categories, called 1cats, must not be duplicated within one stack.[4]  STT tabulates one record for each stack column.[5]  Each tabulated record contains all the items sitting in the current record before the stack occurred.  To visually distinguish stacks, and for some minor cases of missing values, the stack category can be prefixed with a 1.  See Examples 5.1 and 5.2.

While a stack implies a record for each data column, the twoway format typically creates a record for each item placed in a rectangular grid.  The twoway format begins with the twoway directive “%twoway”, with “precat” categories following in that directive’s row.  Below the twoway directive row are a stack of “1cat” rows and a set of “2cat” rows.   The record for grid item at position (r,c) is created from items in the “precat” section of row r, the items in the stack above the grid in column c, and the grid item.  The grid item goes into the “2cat” category that begins the grid row.   See example 6.   Once you understand how to read the relevant categories (precats, 1cats, and 2cats), a twoway SDDL form is just a well-described, two-way table.[6]
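
Outside SDDL, the record-per-grid-cell idea can be sketched in pandas with a hypothetical grid; melting the grid yields one record per cell, which parallels what STT does with a twoway block.

    import pandas as pd

    # A small two-way grid: companies down the side, quarters across the top.
    grid = pd.DataFrame({"company": ["A Co", "B Co"],
                         "3Q2009": [1.0, 2.0],
                         "3Q2010": [1.5, 2.5]})

    # One record per grid item: (company, quarter, subscribers).
    records = grid.melt(id_vars="company", var_name="quarter", value_name="subscribers")
    print(records)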

A key use for the twoway is to re-organize and aggregate various data tables.  Financial publications typically include pivot tables (two-way tables) constructed from a master data table (a one-way table / record list).  The SDDL twoway format allows you to reverse pivot the table to the extent possible (unfortunately, you can’t un-sum a cell total).    For an example of an SDDL table-to-list conversion using real data, see the first data table here (from an AT&T 3Q2010 financial spreadsheet) and the first SDDL twoway here.   A Google search shows that many others struggle to convert (two-way) tables to (one-way) record lists.  With SDDL / STT, you can do that job easily and flexibly.   Moreover, you can automatically aggregate the tables that you convert into one, big master tabulation.

Go ahead and experiment with SDDL / STT and consider whether it might be useful to you.   Here’s an example of using SDDL / STT to aggregate both unstructured and structured data from AT&T’s and Verizon’s  third-quarter, 2010, financial reports (wireless segment).  Here’s a full definition of SDDL and the source code for the STT script.  Want to customize the script to your liking?  Got an idea for improving it?  Go for it!   If you make an improvement to the script, please make it available to everyone.  That’s not a legal requirement; it’s just a polite request.

Only a few persons have big data.  Ultimately, it’s not the size of your data, but what you do with it that really matters.

*  *  *  *  *

Notes:

[1] Tricks exist to fill down blank cells.  But you still have to locate the right columns for the first-changed items and fill down.

[2] STT is currently freely available as a Google Apps Script.   I hope that public-spirited programmers will port it to Microsoft Excel, OpenOffice, Zoho Spreadsheets, and any other applications where it would be useful, and that they will make the resulting ports freely available.

[3] The tabulations appear in a separate sheet from the SDDL.  That sheet by default is named “tabulated”.

[4] My design philosophy for STT’s processing of SDDL is “STT will do the best it can with what it’s given”.  If you duplicate 1cats in a stack, STT will break up the stack into sub-stacks to overcome the duplication.  The STT log file records such situations.

[5] An SDDL stack is the same as the transpose of a record list, where the first row contains field names.  The SDDL directive “gettab” adds a record list to an STT tabulation (which itself is a record list).

[6] The twoway can handle different categories placed as 2cats.  See the SDDL / STT documentation.
