Adapted from J. Drucker's DH 201 Syllabus
WHAT IS DH?
Digital Humanities is an umbrella term for a wide array of practices for creating, applying, and interpreting new digital and information technologies. These practices are not limited to conventional humanities departments, but affect every humanistic field at the university, including history, anthropology, arts and architecture, information studies, film and media studies, archaeology, geography, and the social sciences. At the same time, Digital Humanities is a natural outgrowth and expansion of the traditional scope of the Humanities, not a replacement or rejection of humanistic inquiry. In fact, the role of the humanist is critical at this historic moment, as our cultural legacy migrates to digital formats and our relation to knowledge, cultural material, technology, and society is radically re-conceptualized.
"Promise of Digital Humanities"
"Manifesto for the Digital Humanities"
"Digital Humanities Manifesto 2.0"
HISTORY OF DH
- Vannevar Bush, "As We May Think" (1945).
- Theodor Nelson, "A File Structure for the Complex, the Changing, and the Indeterminate" (1965), in The New Media Reader, eds. Noah Wardrip-Fruin and Nick Montfort.
- Tim Berners-Lee, "The World Wide Web: Past, Present, and Future" (1996).
- Burdick, Drucker, Lunenfeld, Presner, and Schnapp, Digital_Humanities (Chapter 1).
- Susan Hockey, "The History of Humanities Computing," in A Companion to Digital Humanities.
- "Part I: Defining the Digital Humanities," in Gold, Debates in the Digital Humanities.
GENRES AND INFRASTRUCTURE
- Burdick, Drucker, Lunenfeld, Presner, and Schnapp, Digital_Humanities, pp. 27-58, 73-98 (Chapters 2-3; skip "A Portfolio of Case Studies"). [link to open access book]
- Svensson, Patrik. 2010. "The Landscape of Digital Humanities." Digital Humanities Quarterly 4 (1). Paragraphs 33-179.
- Börner, Katy. 2011. "Plug-and-Play Macroscopes." Communications of the ACM 54 (3): 60-69.
- Schulz, Kathryn. 2011. "The Mechanic Muse - What Is Distant Reading?" The New York Times, June 24, sec. Books / Sunday Book Review.
CRITICAL THEORY AND DH
- Todd Presner, "Digital Humanities 2.0: A Report on Knowledge"
- Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge
- Tara McPherson, "Why Are the Digital Humanities So White?" in Gold, Debates in the Digital Humanities
- Elizabeth Losh, "Hacktivism and the Humanities" in Gold, Debates in the Digital Humanities
- Alan Liu, "Where Is Cultural Criticism in the Digital Humanities?" in Gold, Debates in the Digital Humanities
Real Face of White Australia
CRITICAL BIBLIOGRAPHY AND DH
- Peter Stoicheff and Andrew Taylor, "Introduction: Architectures, Ideologies, and Materials of the Page" (pp. 1-15) [blurry PDF of entire chapter]
- Jerome McGann, "Visible and Invisible Books: Hermetic Images in N-Dimensional Space"
- Kenneth Price, "Edition, Project, Database, Archive, Thematic Research Collection: What's in a Name?"
Walt Whitman Archive: http://www.whitmanarchive.org/
The Rossetti Archive (http://www.rossettiarchive.org/);
NINES (Nineteenth Century Scholarship Online): http://www.nines.org/;
Women Writers Project: http://www.wwp.brown.edu/;
Learn about the Text Encoding Initiative (TEI): http://www.tei-c.org/index.xml
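For a first taste of what TEI encoding involves, the sketch below builds and re-parses a minimal TEI document using Python's standard library. The element names (`teiHeader`, `fileDesc`, `titleStmt`) follow the TEI P5 guidelines, but the sample title and verse line are invented, and the document is deliberately too minimal to be schema-valid:

```python
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"
ET.register_namespace("", TEI_NS)

def q(tag):
    # Qualify a tag name with the TEI namespace.
    return f"{{{TEI_NS}}}{tag}"

# Build a minimal (not schema-valid) TEI document: a header plus one verse line.
tei = ET.Element(q("TEI"))
title_stmt = ET.SubElement(
    ET.SubElement(ET.SubElement(tei, q("teiHeader")), q("fileDesc")), q("titleStmt")
)
ET.SubElement(title_stmt, q("title")).text = "Sample Encoded Poem"  # invented title
body = ET.SubElement(ET.SubElement(tei, q("text")), q("body"))
ET.SubElement(body, q("l")).text = "I celebrate myself, and sing myself"

# Round-trip: serialize, re-parse, and read the title back out.
xml_string = ET.tostring(tei, encoding="unicode")
root = ET.fromstring(xml_string)
title = root.find(f"./{q('teiHeader')}/{q('fileDesc')}/{q('titleStmt')}/{q('title')}")
print(title.text)
```

Real TEI projects use much richer markup (and dedicated editors), but the nested header-plus-text shape shown here is the skeleton every TEI file shares.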
- Jo Guldi. What is the Spatial Turn?
- Ed Ayers, Turning toward Place, Space, and Time.
- David Bodenhamer, The Potential of Spatial Humanities.
- Peter Bol, What do Humanists Want?
- Excerpts from Presner, Shepard, Kawano, HyperCities.
HyperCities [Click on "Launch HyperCities". Use Firefox for your browser]
VISUALIZATION AND REPRESENTATION
- McCarty, Willard. 2004. "Modeling: A Study in Words and Meanings." In A Companion to Digital Humanities, edited by Susan Schreibman, Raymond George Siemens, and John Unsworth, 254-70. Blackwell Companions to Literature and Culture. Malden, Mass: Blackwell Pub.
- Drucker, Johanna, and Bethany Nowviskie. 2004. "Speculative Computing: Aesthetic Provocations in Humanities Computing." In A Companion to Digital Humanities, edited by Susan Schreibman, Raymond George Siemens, and John Unsworth, 431-47. Blackwell Companions to Literature and Culture. Malden, Mass: Blackwell Pub.
- Snyder, L. Virtual Reality for Humanities Scholarship.
- Johanson, Christopher. 2009. "Visualizing History: Modeling in the Eternal City." Visual Resources: An International Journal of Documentation 25 (4): 403. doi:10.1080/01973760903331924.
- Favro, Diane. 2012. "Se Non È Vero, È Ben Trovato (If Not True, It Is Well Conceived): Digital Immersive Reconstructions of Historical Environments." Journal of the Society of Architectural Historians 71 (3): 273-77.
- Johanna Drucker, Humanities Approaches to Graphical Display, Digital Humanities Quarterly (Winter 2011).
- Introductory articles on "culturomics" from Science, "Quantitative Analysis of Culture Using Millions of Digitized Books" (2010).
- Matthew Jockers, Macroanalysis. [Excerpts: Metadata]
- Lev Manovich, "Trending: The Promises and Challenges of Big Social Data,” in Gold, Debates in the Digital Humanities.
Wall, J. Transforming the Object of our Study: The Early Modern Sermon and the Virtual Paul's Cross Project. Virtual Paul's Cross Project.
Lev Manovich on Cultural Analytics.
Lev Manovich Projects
NEW MODELS FOR SCHOLARLY PUBLISHING
- "Case Studies" and "Provocations" section in Digital_Humanities (pp. 61-71 & 101-120
- Kathleen Fitzpatrick, Planned Obsolescence (read "peer review," "authorship," and "the university")
- Roy Rosenzweig, "Can History Be Open Source? Wikipedia and the Future of the Past"
- Julia Flanders, "The Productive Unease of 21st Century Digital Scholarship," Digital Humanities Quarterly 3.3 (Summer 2009).
SketchUp can be used to produce and represent impressive 3D models of past and present structures, landscapes, and environments. It is useful for a number of drawing applications, such as interior and architectural design, civil and mechanical engineering, digital humanities projects that recreate and interpret historical spaces, and video game design.
It is a proprietary application with a free version, SketchUp Make, and a paid version, SketchUp Pro, both of which can be downloaded at the SketchUp portal. The application is small, and thus quick and easy to install, and its native SKP files are also compact: you can produce many models without worrying about hard drive consumption. Until 2012, SketchUp was a Google application, much like Google Earth. It is now owned by Trimble Navigation, a mapping, surveying, and navigation equipment company. It remains very compatible with Google products; a model can, for example, be exported as a .kml file, which can then be loaded into Google Earth.
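As a rough illustration of the Google Earth side of that workflow, the sketch below generates a minimal KML file that georeferences a 3D model. The placemark name, coordinates, and model path are all invented here; a real SketchUp export produces this kind of structure (typically with the model saved alongside as a COLLADA file) automatically:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)

def q(tag):
    # Qualify a tag name with the KML namespace.
    return f"{{{KML_NS}}}{tag}"

# A minimal KML document that georeferences a 3D model file.
kml = ET.Element(q("kml"))
placemark = ET.SubElement(kml, q("Placemark"))
ET.SubElement(placemark, q("name")).text = "Reconstructed Forum (hypothetical)"
model = ET.SubElement(placemark, q("Model"))
location = ET.SubElement(model, q("Location"))
ET.SubElement(location, q("longitude")).text = "12.4853"  # invented coordinates
ET.SubElement(location, q("latitude")).text = "41.8925"
link = ET.SubElement(model, q("Link"))
ET.SubElement(link, q("href")).text = "models/forum.dae"  # hypothetical model file

doc = ET.tostring(kml, encoding="unicode")
placemark_name = ET.fromstring(doc).find(f".//{q('name')}").text
print(placemark_name)
```

Saving such a document with a .kml extension and opening it in Google Earth is what "uploading the model into Google Earth" amounts to under the hood.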
SketchUp's primary advantages for digital humanities work are its interoperability with other tools such as Google Earth, the ease of supplementing a model with primary sources and documentation, its comprehensive object libraries, and its general ease of use. One thing I was struck by in visiting RomeLab's Funerary Spectacle Project was how markedly different, and more engaging, its presentation and interpretation of knowledge were compared with reading a paper. SketchUp's main limitation for digital humanities is that one needs to be sure the research questions are suited to this mode of knowledge production.
SketchUp's website offers useful video tutorials familiarizing a user with the program's basic tools. There is also an Extension Warehouse, which features third-party plugins addressing many issues that users with different goals have encountered with the program. The DH 101: Intro to Digital Humanities course website from the UCLA Center for the Digital Humanities offers a virtual lesson on "Modeling Virtual Space," which includes readings.
The biggest advantage of Voyant in my work is getting an overall sense of a corpus of texts: discovering interesting facets of texts, or patterns that recur across them. It is possible to track the use of a particular term or related terms through a single text or across multiple texts, and to visualize these results so that frequency, placement, and some sense of context for particular words or phrases become visible. Using Voyant in this way might initiate a wider understanding of relationships among texts; one can see differences or similarities among texts from different authors, time periods, geographical locations, etc. Voyant may be used as a preliminary assessment of a set of texts to inform a larger research project. I am interested in how Voyant might be used for non-literary texts: news articles, blog posts, databases, or journal articles could be tracked for instances, frequency, and placement of certain words or phrases.
Voyant is extremely user-friendly, and the visualizations users create are easily exportable. One can use Voyant without creating a login or downloading anything. Considering its affordances for particular types of work, its limitations, however few, should be acknowledged. The most striking limitation is that texts must be uploaded to Voyant; the text must therefore be in a digital format in which words are 'recognized.' And the more you try to do with the tool, the messier input, visualization, and analysis become.
The visualizations one can make with Voyant are great for starting conversations; they can facilitate analysis and observation, and they can clearly communicate complex relationships between texts or words. I would recommend this tool most highly for any preliminary scholarly investigation, as well as for pedagogical purposes focused on interpreting, understanding, and producing literary work.
To use Voyant, navigate to its website and paste links, PDFs, or other types of text directly into the tool. One can find a number of examples vetted by Voyant's developers in Voyant's documentation gallery. One I found particularly exemplary of Voyant's advantages, and of its optimal use for starting conversations about themes, word usage, topics, and the relations between these in texts, is Mark Sample's House of Leaves of Grass, a poem of 1,000,000,000,000,000 stanzas.
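The kind of term tracking described above can be approximated in a few lines. The sketch below computes per-document frequencies for a chosen term over a tiny invented corpus; this is only an analogy for what Voyant does behind its interface, not its actual code:

```python
import re
from collections import Counter

# A tiny invented corpus: two "documents" to compare.
corpus = {
    "doc_a": "The archive preserves the past; the archive shapes the present.",
    "doc_b": "Distant reading treats the text as data, and data as text.",
}

def tokenize(text):
    # Lowercase and extract runs of letters: the simplest possible tokenizer.
    return re.findall(r"[a-z]+", text.lower())

# Per-document term frequencies, the raw material of Voyant-style trend lines.
frequencies = {name: Counter(tokenize(text)) for name, text in corpus.items()}

# Track a single term across the corpus.
term = "archive"
trend = {name: counts[term] for name, counts in frequencies.items()}
print(trend)  # {'doc_a': 2, 'doc_b': 0}
```

Voyant layers stop-word filtering, context windows, and visualization on top of exactly this kind of counting, which is why its results are easy to interrogate.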
The "Topic Modelling Tool" (TMT), available through the Google Code repository, provides quick and easy topic-model generation and navigation. The TMT is easy to install: with a working Java installation on any platform, download the TMT program and click on it to run it; Java will open the program.
The advantages of the TMT are its availability and ease of use. Using these tutorials from Shawn Graham, Ian Milligan, and Scott Weingart, it is easy to perform topic modelling, or text analysis of large corpora of textual information. Topic modeling is an attempt to inject semantic meaning into vocabulary: topics are collections of words that co-occur in documents across a corpus. You can name topics, and/or assign meaning to them, to see how topics are arranged in the corpus. Because topic modeling creates models, researchers should consider the entire model as they analyze their results; herein lies the main limitation of the tool for digital humanities work, since focusing too much on a single topic without considering the others may invalidate the results. Before you begin with topic modeling, you should seriously consider your research question and whether this type of distant reading is useful to your project. Matthew Kirschenbaum's Distant Reading is a good place to start if one is interested in understanding when this type of work is warranted and when it is not.
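The TMT itself wraps a statistical model (MALLET's LDA implementation), but the underlying intuition, that topics emerge from words co-occurring in documents, can be sketched in deliberately simplified form. The mini-corpus below is invented, and counting shared word pairs is a stand-in for the real algorithm, not a description of it:

```python
from collections import Counter
from itertools import combinations

# A tiny invented corpus; each "document" is a bag of content words.
documents = [
    ["garden", "seed", "weather", "frost"],
    ["garden", "seed", "harvest"],
    ["fever", "salve", "midwife"],
    ["fever", "midwife", "birth"],
]

# Count how often each pair of words appears together in a document.
cooccurrence = Counter()
for doc in documents:
    for pair in combinations(sorted(set(doc)), 2):
        cooccurrence[pair] += 1

# Pairs that recur across documents hint at a "topic".
topical_pairs = sorted(pair for pair, n in cooccurrence.items() if n > 1)
print(topical_pairs)  # [('fever', 'midwife'), ('garden', 'seed')]
```

Even in this toy version, two word clusters (gardening, medical care) fall out of nothing but co-occurrence; LDA does the same thing probabilistically, at scale, and it is the researcher's job to name and interpret the clusters.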
A great example of the use of the TMT for scholarly research can be found in Cameron Blevins' Topic Modeling Martha Ballard's Diary. One can see how Blevins has highlighted topics found in the texts of the diaries and visualized and analyzed these findings, while maintaining a critical awareness of the limitations of the TMT as a research resource.
In the days before I used Zotero, I allotted at least an entire day of paper editing to adding and perfecting the footnotes and references. Zotero's greatest advantage is that it streamlines this process: one can insert already perfectly formatted footnotes and references on the fly with the help of the Zotero Firefox browser and word-processor plug-ins. This vastly simplifies and expedites collaborative work that draws from a number of sources. The "network effects" of Zotero had been a negative aspect of the application for collaborative work in the past (very few people worked with it and developed it, so it was buggier and harder to use with groups), but over the last few years this has changed: it is now the most popular and best-developed citation application "on the market." Because it is free and open source, it costs nothing to use and does not depend on a corporate API that could change or be shut down. The open-source nature of the application also allows for a number of plug-ins and add-ons that do just about anything anyone envisions; for example, several plug-ins allow for easy citation analyses. The application can also synchronize across devices, provided that they are desktop or laptop devices, with a single sign-in username and password. This further facilitates the ease of use and collaborative nature of the application.
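The formatting step Zotero automates can be pictured as a citation style applied to structured metadata. The sketch below applies a rough Chicago-like footnote template to a record for one of the articles on this syllabus (the volume and issue numbers are supplied from memory); Zotero's real engine uses CSL style files, not hand-written templates like this:

```python
# A bibliographic record, shaped like the fields Zotero stores per item.
entry = {
    "author": "Johanna Drucker",
    "title": "Humanities Approaches to Graphical Display",
    "journal": "Digital Humanities Quarterly",
    "volume": "5",
    "issue": "1",
    "year": "2011",
}

def footnote(e):
    # A rough Chicago-style journal footnote; real styles live in CSL files.
    return (f'{e["author"]}, "{e["title"]}," '
            f'{e["journal"]} {e["volume"]}, no. {e["issue"]} ({e["year"]}).')

print(footnote(entry))
```

Because the metadata is stored once and the style is applied at insertion time, switching a whole paper from Chicago to MLA is a single setting change rather than a day of manual editing.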
The drawbacks of Zotero are that, while the project is open source, a number of the plug-ins can be buggy. It does not work with any browser but Firefox, which is itself increasingly buggy, and it does not work on mobile or tablet devices.
To get started with Zotero, navigate to https://www.zotero.org/ in a Firefox browser and follow the prompts. Be sure that your word-processing application is not open and running when you do this, so that the plug-in successfully installs itself in your word processor when you next open it. There are many tutorials online to get you on the way to successfully using Zotero; I suggest this tutorial from the UCLA Library.
Omeka is a useful tool for digital scholarship, as it allows one to perform a range of tasks: building archives and exhibits, cataloging items, adding metadata, and arranging items and categories as a particular project requires. While Omeka is easy to get up and running, with a number of features, it is tricky to get it to function in a particular way, especially if one has a particular traffic flow or wireframe in mind. Omeka can be useful for organizing and displaying a large variety of projects, but as each digital project brings its own set of nuances, one will often need to customize the project, and coding experience is key for customization. Additionally, the current install of Omeka seems to be quite buggy, so not just coding experience but often extensive CS experience is crucial to launching and sustaining successful Omeka projects.
The advantage of Omeka lies in its ability to add metadata to items so that the project is easily accessed by other scholars and interested parties. The Dublin Core metadata standard applied by default to all items cuts both ways. Clearly it is good to have a metadata schema applied to digital objects for collaborative work, inter-institutional participation, general access, and sustaining the archive in perpetuity. On the other hand, Dublin Core is a loose standard: one can fill it out sloppily, according to one's own rules, or differently for each item, which in many cases renders the Dublin Core metadata not particularly useful for outside parties wishing to find particular items in a digitally curated archive or exhibit. A project that does a particularly nice job integrating original sources, metadata, and contextualizing information to develop a pedagogical tool is Making the History of 1989, a site developed by the Roy Rosenzweig Center for History and New Media (CHNM) at George Mason University.
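To make the Dublin Core discussion concrete, the sketch below serializes an invented item record using a few of the fifteen `dc:` elements. The field values are made up for illustration; a real Omeka install generates and stores this kind of record through its admin interface:

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

# An invented item record, keyed by Dublin Core element names.
item = {
    "title": "Letter from Berlin, November 1989",
    "creator": "Unknown",
    "date": "1989-11-10",
    "subject": "Fall of the Berlin Wall",
}

# Serialize the record as dc:-namespaced XML, as a metadata export might.
record = ET.Element("record")
for element, value in item.items():
    ET.SubElement(record, f"{{{DC_NS}}}{element}").text = value
xml_string = ET.tostring(record, encoding="unicode")

# Read the title back out of the serialized record.
parsed = ET.fromstring(xml_string)
item_title = parsed.find(f"{{{DC_NS}}}title").text
print(item_title)
```

The looseness discussed above is visible here: nothing in the schema stops "Unknown", an empty string, or a date in any format from landing in these fields, which is why consistent local cataloging rules matter as much as the standard itself.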
Go to www.omeka.net and click on Sign Up. Choose the Basic plan and fill in the sign-up form. Check your email for the link to activate your account; then you will be up and running with the online CMS. If you want to install Omeka on your own server, the process is more arduous, and I do not advise it.