Scholars' Lab - University of Virginia

An Adapter Adapter

March 23, 2026, 23:43

Over the weekend, one of the amazing student technologists, Link, did a cleaning and reorganizing of the resin 3D printer station. The printer gives off some nasty fumes, so she was able to procure an air purifier set up just for such printers. Unfortunately, the model available doesn’t directly connect with our Prusa SL1S. Link put the air filter in place, but had to resort to duct tape to get it to connect to the resin printer. It didn’t work.

So when I came in this morning and saw the need for an adapter to the adapter, I knew what I was going to do today!

I spent some time thinking about the best options. An insert with magnets? But how does the original adapter stay put on the new adapter?

Well, there are screw holes, how about using them? Yep, that’s the ticket. Basically replicate the bottom of the original adapter so it can screw to the new, then add a whole bunch of magnets!

And it worked on the first try! I had to double up the magnets in order to make it strong enough to stay on, and the gasket printed in TPU could be a little bit thicker. But it was a great success!

The models are available on Printables.com for download and 3D printing.

Gear Train Assemble!

March 19, 2026, 19:01

seven versions of the holder so far

On the heels of the knowledge gained from the last post figuring out the gear train numbers, I set out to model the gears and a holder.

And immediately I was confronted with an issue with my calculations.

The Problem

At the end of all my learning and calculating, I decided:

It looks like 36mm (servo) —> 12mm|36mm —> 12mm (pinion) 
has smaller gears and gives good enough range.

One thing I forgot to consider is the length of the servo horn that is used to connect the servo to the gear. I could do without it, but trying to design and print such a small toothed hole has issues. I have seen others try, and filament 3D printing does not provide fine enough detail to mesh well with the servo gear. So using the supplied horn attachment makes things much easier.

The problem is that the horn is about 22mm in length. If my gear is only 36mm in diameter, then the horn would stick out into the gear’s teeth!

Gear is too small!

Another sidetrack bump I had to overcome was getting the dimensions of the servo horn. The dimensions I could find online were unsatisfactory. So I measured one myself!

SG90 servo horn

I went ahead and 3D modeled it and put the 3D model and diagram files up on printables.com for anyone to use.

With all of that info, I can then recalculate the gear train dimensions so it fits with the servo horn.

The Correct Gear Train

I played around with different settings, but it seemed the best option (that being the smallest size for the servo and large combo gears) called for a 46mm servo gear → 20mm|46mm combo gear → 20mm pinion.
Servo motor → 46
              ↓
             20 (same shaft as) 46
                                ↓
                               20 (pinion)
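As a quick sanity check (my own back-of-the-envelope numbers, not from the project spreadsheet), the resized train should still clear the 150mm target when the servo really sweeps a full 180°:

# Rough check of the 46 → 20|46 → 20 train
import math

servo_rotations = 0.5                          # 180° sweep = half a revolution
ratio = (46 / 20) * (46 / 20)                  # two 46→20 stages: 2.3 × 2.3 = 5.29
pinion_rotations = servo_rotations * ratio     # about 2.645 revolutions at the pinion
rack_travel = pinion_rotations * math.pi * 20  # 20mm pinion → roughly 166mm of travel
print(round(ratio, 2), round(pinion_rotations, 3), round(rack_travel, 1))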

Sidetracked Again!

The first day of modeling, I decided to jump the Fusion 360 train and try onshape.com. It’s a web-based 3D modeling and CAD tool. It has been around since 2015, and is gaining ad time lately in many of the YouTube.com videos I see, so I thought I’d give it a try. I was prepared for some learning curves and to spend some time learning a new system, but two things got me to throw in the towel after a full day of working with it: 1) I couldn’t figure out how to do something pretty simple that would take 2 minutes in Fusion 360, and 2) I didn’t care for the interface; it felt too unprofessional. If TinkerCad.com is the elementary school version of CAD, it looked like Onshape.com was the 9th grade version. I did love that it was browser based. And making double helical gears was a breeze! There’s a handy built-in menu for all kinds of gears. Fusion 360, on the other hand, takes a big L in gear making. You have to import 3rd-party scripts, and I can’t get any of the fancy gear scripts to work.

Like many things, it was the fact that I could get things done much faster with the tool I already knew, and that I was accustomed to the interface, that led me back to Fusion 360.

Making the Gear Train

I had previous attempts at designing the gear train, but I decided to start from scratch since Fusion 360 doesn’t have an easy way to just change the size of gears when using the gear script plugin thing.

Servo Gear

So, first I designed the servo gear. Pretty easy to create a 46 tooth gear with the gear script plugin thing.

servo gear with inset for servo horn

I designed a cut out, or inset, for the servo horn to fit inside. This is the easiest way to attach the gear to the servo. 3D printing these gears with filament would not have enough resolution to print the fine teeth needed to interface with the tiny default gear on the servo shaft. Much easier to use the included horn.

Combo Gear

The combo gear was pretty easy, too. Just make another 46 tooth gear, then make a 20 tooth gear and stack them on top of each other.

combo gear

I set the diameter of the hole through the gear at 4.2mm. That’s big enough for an M4 bolt to go through, with just enough tolerance to allow the gear to spin but not wobble.

Pinion Gear

Another very simple gear to model. There’s nothing special about this, just a 20 tooth gear with a 4.2mm diameter hole.

pinion gear

Rack

The rack is pretty straight forward. I created a 20 tooth gear, then used one of those teeth to copy down the length of the rack.

pinion gear

Gear Holder

This was a little bit trickier. The gears were all prototyped in one go, and the first print was great. This part has taken seven tries so far.

seven versions of the holder so far

I started by creating a new Assembly in Fusion. Then adding in the gears and aligning them as needed. I went with a stacked approach so as to keep the footprint as small as possible. I had previously modeled the servo motor, so I was able to add that in as well.

It was tricky to get the servo aligned with the servo gear, and then get each of the gears aligned with the ones they mesh with. I realized that if a part has its sketch turned on, then that shows up in the Assembly file. I used that to create a construction line on the servo gear and put a point where the center of the combo gear should be aligned. Then I did the same on the combo gear to align the pinion. Then I added the holder, servo motor, and rack.

assembly, color coded

It was a lot of back and forth between the designs for the parts and the assembly to align everything correctly. But in the end I think it lines up well.

Spacers

After the first version, I realized that the gears needed spacers to keep them in place. The holder is wider than the gears. So modeling and printing a couple of spacers is pretty easy.

spacer

Somewhat Working

I connected everything up, bolted in the gears, and plugged it in. And it works… mostly.

As the video shows, the gears work, somewhat. There is a bit of jittering, which may be due to the code just rotating the gears back and forth. A more normal behavior would be moving from one angle to the next and stopping there. The servo is also not moving at a full 180°. More like 100°. This is only about 111mm of travel, not the 150mm we’re hoping for. It might be time to consider better quality servos. Perhaps some that move 270°.
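One likely fix for the jitter is to step toward a target angle once and then hold it. Here is a minimal sketch, assuming the Adafruit ServoKit library that pairs with the 16-channel PWM bonnet from Part 1; the channel number and step size are placeholders, not the project’s actual code:

import time
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)
servo = kit.servo[0]   # placeholder channel

def move_to(target, step=2, delay=0.02):
    # Step the servo toward the target angle, then hold it there.
    current = servo.angle if servo.angle is not None else 90
    while abs(current - target) > step:
        current += step if target > current else -step
        servo.angle = current
        time.sleep(delay)
    servo.angle = target   # settle on the final angle instead of sweeping back and forth

move_to(150)   # raise the state
time.sleep(2)
move_to(30)    # lower it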

It is also a pain to swap the servo motor. Perhaps a redesign is in order.


Missed the first two posts?

Funding provided through a generous grant from UVA Arts Council.

DAC-DHC Fellowship 2026-2027 CFP

March 12, 2026, 12:00

The Digital Humanities Center, in partnership with the Data Analytics Center, is excited to share our Call for Proposals for the 2026–2027 academic year.

Applications are due May 15, 2026.

Project teams should consist of one faculty member and one graduate student as collaborators on humanities research at the University of Virginia. We welcome proposals that explore experimental humanities research through the use of high-performance computing resources. We encourage projects that challenge traditional understanding of digital humanities (or even what has been considered humanities research), involve ethical and philosophical issues raised by new technologies, or explore new opportunities for using high-performance computing tools and techniques to better understand the human record.

Find out more on our Fellowship Page!

Jump on the Gear Train! - Critical Making 2025-2026 - Part 2

February 27, 2026, 20:32

This week I started figuring out the gear train and rack and pinion needed to move the states. I have tried in the past to get AI to do this for me, but the results from ChatGPT and Copilot are all over the place and vary every time I ask. So instead of trying to get AI to figure it out for me, I decided to use it to help me learn the math behind the problem and work it out myself.

Be forewarned, there is lots of math!

math?!?

TLDR;

What helped immensely was GearGenerator.com

And a spreadsheet that AI generated to calculate the travel distance of a rack based on the pinion diameter and number of rotations.

With help from GearGenerator.com I created a gear train which starts with a 36mm gear attached to a servo motor that meshes with a 12mm gear which is on the same axle (or combined) with a 36mm gear that then meshes with a 12mm pinion.

Input                         Value    Units / Notes
Servo Gear Diameter           36       mm
Servo Gear Rotations          0.5      rev (revolutions)

Combo Gear Small Diameter     12       mm
Combo Gear Small Rotations    1.5      rev
Combo Gear Large Diameter     36       mm
Combo Gear Large Rotations    1.5      rev

Pinion Diameter               12       mm
Pinion Rotations              4.5      rev

Output                        Formula            Result
Rack Travel per Rotation      π × 12             37.69911184 mm
Total Rack Travel             π × 12 × 4.5       169.6460033 mm

But how did I get this answer?

Well, let me take you on a journey to how I figured this out and understood the problem.

What am I trying to do?

Each state will move up and down based on the data given. Prof. Rich Ross and I want this to be a pretty dramatic difference between the lowest and highest states, so we decided on 150mm of movement. After much deliberation and looking at various options, Yashu and I decided that a rack and pinion would be the easiest way to accomplish the goal. What is a rack and pinion? It’s simply a gear that moves a bar with gear teeth. It turns a rotational movement into a linear movement. The round gear is the pinion; the bar with teeth is the rack.

A rack and pinion, a large circular gear moving a long bar with teeth

We figured the servo can rotate the pinion and move the rack attached to the bottom of the state up and down. (See last week’s post with a video of the test case.)

First Gear

The first question we needed to answer was: how far does the rack travel based on the size of the pinion?

The formula to figure this out is

\( T = Dπ \)

T is the rack travel distance, D is the diameter of the pinion, π is yummy and 3.14159265359…

It’s basic middle school geometry. The travel distance for one full rotation is just the circumference of the gear.

But our servo motor only goes 180 degrees, or half a rotation, or half the circumference. So we’ll have to take that into consideration.

\( T = {Dπ \over 2} \)

The smallest size gear we want to use is probably 12mm. I just pulled that number out of the air (but actually, we’ll use it because I already know it works within all the parameters and constraints for the project 😁 ).

A pinion gear with a diameter of 12mm will cause a rack to travel \( T = 12π = 37.699mm \) But since our servo can only do a half rotation, the rack travel length is \( T = {12π \over 2} = 18.85mm \). I want the rack to travel at least 150mm. So a 12mm gear is not going to work.

In order to get the rack to travel 150mm from a single gear, we can calculate that

\( T = {Dπ \over 2} \)

\( {2T \over π} = D \)

\( {2(150) \over π} = 95.493mm \)

We would need a 95.5mm diameter pinion!

The footprint of the smallest state is about 45mm square. So everything has to fit within that area. So the largest a gear can be is about 40mm. We’ll need a different way to do this.

One option is to use multiple gears. We can use multiple small gears to create the same effect as one large gear. How does that work?

Second Gear

Well, if we just have two gears and one is smaller than the other, then if we turn the small gear one full rotation, the bigger gear will turn less than one full rotation. If the bigger gear is twice the size of the smaller gear, then for every 2 turns of the small gear, the big gear will turn once.

The blue, smaller gear is the one moving, or driving the larger gear

small turning large

The opposite is true, too. If the big gear rotates one full revolution, the small gear revolves twice.

large gear turning a small gear

Large turning small

The ratio between the sizes of the gears is called the gear ratio. The gear ratio is the diameter of the driven gear divided by the diameter of the driving gear. The driving gear is the one attached to a servo or some other source that generates the movement, and the driven gear is the one it turns. So we get the formula:

Ratio (R) = Diameter (D) of driven gear (n) / Diameter (D) of driving gear (g)

\( R = {D_n \over D_g} \)

If the ratio is greater than 1, then the output is a slower turning gear, a reduction in speed.

If the ratio is smaller than 1, then the output is a faster turning gear, an amplification of speed.

Start with a smaller wheel, the bigger wheel turns slower. Start with a bigger wheel, the smaller wheel turns faster.

We also write the ratio like this: R = Dn:Dg

OK. Next, how do we figure out the gear ratios for the gear train, and how many do we need?

Let’s take two gears, one is 36mm in diameter and the other is 12mm.

If the 12mm gear is the driving gear (it is connected to the servo), then the ratio is \( R = {36 \over 12} = {3 \over 1} \) or 3:1. The ratio is greater than 1, so the output is a slower turning gear.

One way to look at it is to read the ratio from left to right. A bigger number is faster than a smaller number. R = 3:1. 3 to 1. This is a decrease. From speed 3 to speed 1. Slower.

But if the driving gear is 36mm, then the ratio is \( R = {12 \over 36} = {1 \over 3} \) or 1:3, the output is a faster turning gear.

Read from left to right. 1 to 3, is an increase. From speed 1 to speed 3. Faster.

If we stick with a 12mm pinion, how many times does it need to revolve in order to move a rack 150mm?

The circumference of a 12mm pinion is

\( C = Dπ \)

\( C = 12π = 37.699mm \)

so

\( {150mm \over 37.699mm} = 3.9789 \) revolutions

A 12mm pinion needs to rotate about 4 times in order to move a rack 150mm.

So if we have a second gear turning the 12mm pinion, how big does it need to be? Recall that the servo gear will only be able to rotate half of a rotation. The gear ratio, then, needs to be 0.5:4. Half a rotation to 4.

\( {12 \over x} = {0.5 \over 4} \)

Doing some cross multiply,

\( 12(4) = 0.5x \)

\( {48 \over 0.5} = x \)

and we get that x = 96. Whelp. Back to a 96mm gear. This is because the servo gear only rotates halfway.

Third Gear

So, I ask myself, “Self, if you know the pinion size and how many rotations it needs to complete, how do you calculate the other gears in the train if you don’t know exactly what sizes and how many are needed?”

I replied, “You know, that’s something that I don’t know.”

Well, I asked Copilot, too. (Copilot took into account previous conversations where I told it the gear ratio was 0.5:4 and an earlier example using 36mm and 12mm gears. And it uses gear Teeth instead of Diameter, but these are the same thing.):

Copilot said:

Great question — this is the heart of gear‑train design. When you know:

  • the pinion diameter (or teeth)
  • the pinion rotations required

…but you don’t yet know:

  • how many gears
  • what sizes
  • or which ratios

…you can still design the entire gear train systematically using a step‑by‑step math process. Below is the cleanest and most universal method mechanical engineers use.

✅ Step 1 — Convert rack travel to pinion rotations (if needed)

If you already know the pinion must turn N rotations, skip this. If you only know the travel distance:

\( \text{Pinion Rotations} = {\text{Rack Travel} \over \pi D} \)

✅ Step 2 — Determine the required gear‑train ratio

This is the most important step. You need to know:

  • How many rotations the motor (input) produces

vs.

  • How many rotations the pinion (output) must make

\( R_{total} = { \text{Pinion RPM or Rotations} \over \text{Motor RPM or Rotations}} \)

Example:

  • Servo rotates 0.5 turns
  • Pinion must rotate 4.5 turns

\( R_{total} = {4.5 \over 0.5} = 9 \)

So the gear train must create a 9× speed increase (or 1:9 reduction, depending on convention).

✅ Step 3 — Factor the total ratio into small ratios

Gear trains are built by splitting a large ratio into smaller multiply‑able pieces. For example, if your target is:

\( R_{total} = 9 \)

Then possible factorizations are:

  • 3 × 3
  • 1.5 × 6
  • 1.5 × 1.5 × 4
  • 2 × 4.5
  • 1.2 × 1.25 × 3 × 2
  • (36→12) × (36→12) ← your earlier example

You can pick any combination of small ratios that multiplies to your total.

✅ Step 4 — Assign tooth counts to each ratio

Each stage ratio is:

\( R = {T_{driven} \over T_{driver}} \)

For example, suppose you choose two stages:

\( R_{1} = 3 \), \( R_{2} = 3 \)

You now choose tooth counts that satisfy each:

Stage 1: ratio = 3

Examples:

  • 12 driving 36
  • 10 driving 30
  • 16 driving 48

Stage 2: ratio = 3

Same options.

A compound gear (two gears on one shaft) lets you chain them:

(motor) 36
        | (ratio of 3)
        ↓
       [12 same shaft as 36]
                          | (ratio of 3)
                          ↓
                (pinion) 12

Ratios multiply:

\( 3×3=9 \)

✅ Step 5 — Check physical constraints

When selecting gear sizes, check:

  • Gear diameters fit in available space
  • Teeth counts match a common module or DP
  • Shaft spacing is feasible
  • 3D printing tolerances allow proper meshing
  • Required torque is acceptable
  • Compound gears are not too small or too fragile

If something doesn’t fit, adjust tooth counts but keep ratio the same.

✅ Step 6 — Iterate until a valid mechanical layout exists

This is normal: you will often try 3–6 combinations until the gear spacing, shafts, and reductions all work.

🧮 A Worked Example (General Method)

Suppose:

  • You want 5 pinion rotations
  • Input shaft makes 1 rotation
  • Total ratio required:

\( R_{total} = 5 \)

Factor it:

  • 5 = 5
  • 5 = 2.5 × 2
  • 5 = 1.25 × 4
  • 5 = 1.666 × 3

Pick: 2.5 × 2

Assign tooth counts:

  • Stage 1: 12 → 30 (ratio = \(30 \over 12 \) = 30:12 = 2.5)
  • Stage 2: 15 → 30 (ratio = \(30 \over 15 \) = 30:15 = 2.0)

Total:

\( 2.5 × 2 = 5 \)

Build it as a compound:

(input shaft) 30T
              ↓
              12T -- same shaft -- 15T
                                    ↓
                          (pinion) 30T

Pinion rotates 5× per 1 input rotation.

🎯 The Key Insight

You don’t need to know the gear sizes up front. You only need to know the total ratio, and then you can:

  1. Factor the ratio into small ratios
  2. Assign teeth counts that match each small ratio
  3. Assemble them into a compound gear train
  4. Adjust until it fits mechanically

This process is how all engineered gearboxes are designed.

Fourth Gear

OK. That’s cool. Now to ask Copilot to figure it all out for me.

“Copilot, walk me through each step of how to calculate a gear train to drive a pinion 4 rotation with a servo motor generating 0.5 rotations and the largest a gear can be is 40mm, and using the smallest number of gears possible, and the gears should be easily 3D printed.”

Copilot said:

Reasoning completed in 3 steps

Absolutely — let’s design a compact gear train that makes a pinion turn 4 rotations when your servo only turns 0.5 rotations, with the constraint that no gear may exceed 40 mm diameter, using the fewest gears possible and keeping everything easy to 3D print.

🎯 What we need (in plain math)

  • Input (servo): 0.5 rotations
  • Output (pinion): 4 rotations
  • Required speed increase:

\( S_{total} = {\text{pinion rotations} \over \text{servo rotations}} = {4 \over 0.5} = 8 \)

So the gear train must provide an 8× speed increase (equivalently, a 1:8 reduction in the “driven/driver” convention).

  • Constraint: largest gear ≤ 40 mm diameter.
  • Goal: Fewest gears (i.e., 2 stages / 4 gears total with one compound shaft).

✅ We’ll target a two‑stage compound gear train (4 gears total). If we can’t meet printability constraints, I’ll give a 3‑stage fallback.

🔧 Design conventions (for easy 3D printing)

  • Use equal gear module (e.g., module m = 1 mm/tooth) so:

\( \text{Pitch diameter (mm)} ≈ m × \text{tooth count} \)

With m=1, a 40‑tooth gear ≈ 40 mm diameter — perfect for your 40 mm limit.

  • Common printable tooth counts (no special profile shift): 12, 15, 16, 18, 20, 24, 30, 36, 40.
  • Pressure angle: 20° (typical for printable involute gears).
  • Try to avoid very small tooth counts (<12) to reduce undercut and fragility.

🧮 Step‑by‑step method

Step 1 — Decompose the total ratio into stage ratios

We need:

\( S_{total} = S_{1} × S_{2} = 8 \)

We’ll look for two reasonable stage increases (each <4×) that multiply to 8.

A convenient split is 3.333… × 2.4 ≈ 8.

Step 2 — Map stage ratios to tooth (or diameter) ratios

Speed increase for a stage (driver → driven) is:

\( S = {D_{driver} \over D_{driven}} = {T_{driver} \over T_{driven}} \)

(Assuming same module.)

So we want (example picks):

  • Stage 1 \( S_{1} ≈ 3.333 \): 40T → 12T (\( 40 \over 12 \) = 3.333…)
  • Stage 2 \( S_{2} = 2.4 \): 36T → 15T (\( 36 \over 15 \) = 2.4)

Total:

\( S_{total} = {40 \over 12} × {36 \over 15} = {10 \over 3} × {12 \over 5} = {120 \over 15} = 8 \)

Bingo.

Step 3 — Lay out the gear train (compound shaft)

[(A)40T Servo]  
    |
  meshes                        Stage 1 (40→12)                      
    ↓ 
[(B)12T same shaft as (C)36T]  
                        |
                      meshes    Stage 2 (36→15)
                        ↓
                 [(D)15T Pinion]
  • (A) is connected to the servo motor.
  • (B) and (C) are rigidly fixed on the same shaft (compound).
  • The pinion is gear (D) = 15T in this design.

Step 4 — Verify motion requirement

Servo turns 0.5 rev → pinion turns:

\( 0.5 × 8 = 4 \text{ rev} \)

Meets the spec. ✅

Step 5 — Verify size constraints (≤40 mm)

With m = 1 mm/tooth:

  • 40T → ~40 mm
  • 36T → ~36 mm
  • 15T → ~15 mm
  • 12T → ~12 mm

All ≤ 40 mm. ✅

Step 6 — Check printability & robustness

  • Smallest gear is 12T—printable in PLA/PETG with 20° PA; 15T pinion is sturdier and meshes well.
  • Keep face width ~6–10 mm for strength (depends on load).
  • Add 0.1–0.2 mm backlash target (or scale tooth thickness down ~1–2%) to compensate for printer tolerances.
  • Use fillets at hub/arm junctions; ensure set‑screw hub or D‑shaft interface.

✅ Minimal‑gear solution (recommended)

Two stages / four gears (one compound shaft):

  • Stage 1: 40T → 12T (3.333× speed increase)
  • Stage 2: 36T → 15T (2.4× speed increase)
  • Total: 3.333 × 2.4 = 8
  • Result: Servo 0.5 rev → Pinion 4 rev

This uses just 4 gears and fits the 40 mm cap.

Nice! If we have a 15mm pinion rotating 4 times, that moves the rack

\( T = π × D × 4 \)

\( T = π × 15 × 4 = 188.5mm \)

I plugged these calculations into a spreadsheet (rack_travel_calculator.xlsx) to help me play around with the gear sizes. I wanted something smaller than 40mm.

It looks like 36mm (servo) —> 12mm|36mm —> 12mm (pinion) has smaller gears and gives good enough range.

That’s a 3 × 3 = 9 total ratio. Better than the 8 needed.

12mm pinion rotating 4 times moves a rack

\( T = π × D × 4 \)

\( T = π × 12 × 4 = 150.796mm \)

That’s just perfect, but in a gear train, the pinion actually rotates 4.5 times, so the rack will move 169.646mm. Even more perfect!
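If you’d rather script it than spreadsheet it, the same numbers fall out of a few lines of Python (a sketch of the spreadsheet’s logic, using the gear sizes from the table above):

import math

servo_rotations = 0.5                 # 180° servo = half a revolution
servo_gear = 36                       # mm, gear on the servo
combo_small, combo_large = 12, 36     # mm, compound gear on one shaft
pinion = 12                           # mm, gear driving the rack

# Each stage multiplies speed by (driver diameter / driven diameter).
total_ratio = (servo_gear / combo_small) * (combo_large / pinion)   # 3 × 3 = 9
pinion_rotations = servo_rotations * total_ratio                    # 4.5 revolutions
rack_travel = pinion_rotations * math.pi * pinion                   # ≈ 169.646mm
print(total_ratio, pinion_rotations, round(rack_travel, 3))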

gif of gear train at this ratio

Gear train

OK. Now with the math and numbers all figured out, and some additional tips from Copilot about layout, it’s time to start modeling a gear train!

🧰 Practical layout tips

  • Center distance between meshing gears:

\( c = {m \over 2}(T_{1} + T_{2} ) \)

With m=1, 40T–12T center distance = (1/2)(52) = 26 mm;

36T–15T center distance = (1/2)(51) = 25.5 mm.

  • Maintain parallel shafts and allow axial shimming (washers) to tune backlash.
  • If you’re driving a rack, ensure the final gear module matches the rack’s module (e.g., both m=1).
  • For SG90 servos, print large driver gears with a robust hub and insert brass heat‑set inserts for screws if possible.

Phew! That was a lot! Next week, 3D model the gear train!

Funding provided through a generous grant from UVA Arts Council.

Read Part 1 - The Beginning

Critical Making Fellowship 2025-2026: Part 1

February 20, 2026, 20:54

Introduction

Beginning in 2023, the Makerspace has funded a graduate student, usually in the humanities, to design and complete a physical project that represents or comes from their humanities research. Named the Critical Making Fellowship, the previous projects were Alexa Luborsky’s “Diaspora Poem” (2023) and Holly Zhou’s “Memory Maps” (2024 & 2025). This year’s Fellowship began as a collaboration with Statistics Professor Rich Ross and Makerspace Manager Ammon Shepherd. Generously funded by a UVA Arts Council Grant, the 2026 project was able to hire Yashu Tang for the Fall 2025 semester to get the process started. Below is the write-up of the project and contributions from Yashu.

Project Overview

This project aims to visualize electoral voting patterns in previous presidential elections using an interactive data-sculpture. Each state is represented with a cluster of hexagonal cells equal to the number of electoral votes allocated to the state.

"2012 US Electoral map"

The state-block will have two actions in response to input data:

  1. light up blue or red depending on the party of the winning candidate and
  2. raise or lower itself proportional to the margin of victory.

The data on state-level electoral votes is from UCSB’s American Presidency Project website.

Data

The data has already been scraped from UCSB’s website and contains data on the 1924-2024 elections. They are stored in the election data folder on the shared drive, with a separate .csv file for each election year. Files are named with the convention Election_####, e.g. Election_1968. Years with atypical electoral votes have been noted in the data description document, which can also be found in the election data folder.

The data columns are as follows:

State: State where the data was collected

Total: total votes tallied for the state

Popular_D: Number of popular votes won by the Democratic candidate

Prop_D: Proportion of total votes received by the Democratic candidate

EV_D: Number of electoral votes won by the Democratic candidate

Popular_R: Number of popular votes won by the Republican candidate

Prop_R: Proportion of total votes received by the Republican candidate

EV_R: Number of electoral votes won by the Republican candidate

  • If a 3rd party candidate won electoral votes, the votes will be recorded with similar conventions as above, with “_3rd” as the suffix.

EV_Outcome: Column with electoral vote outcomes. Data is stored with integer values of electoral votes, followed by the party they were cast for, separated by spaces. E.g. 3 D 0 R 0 I, where 3 votes were cast for the Democratic candidate, and 0 for the Republican and Independent/3rd party candidate.

Margin: Margin of victory, calculated using

MOV = (Raw margin of victory) / (Total votes in state)

where the raw margin of victory is the difference in popular votes between the first and second place candidates in the state.
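As an illustration of how those columns fit together, here is a minimal pandas sketch that recomputes the margin for one election file (the filename is just the naming convention’s example; the files already contain a Margin column, and years with a third-party candidate would also need the _3rd columns):

import pandas as pd

df = pd.read_csv("Election_1968.csv")   # example filename from the convention above

# Raw margin of victory = difference in popular votes between first and second place;
# this simplified check only compares the two major parties.
raw_margin = (df["Popular_D"] - df["Popular_R"]).abs()
df["MOV_check"] = raw_margin / df["Total"]

print(df[["State", "EV_D", "EV_R", "MOV_check"]].head())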

Hardware

The state-blocks will be 3D printed by Ammon at the Makerspace and mounted on a similar 3D printed rack-and-pinion mechanism to achieve the raising and lowering.

The vertical movement will be powered by 180-degree rotation Servo motors. They were chosen because their degree of rotation can be specified in code. This allows for an easy transition from the MOV value in the data into Servo motor directions. The Servo motors will be connected to the Adafruit 16-channel PWM bonnet, and a brief tutorial on connecting and using the bonnet can be found here. For this project, the bonnets will use the 3x4 right angle male headers, so the Servo motors can all interface with the Pi.
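A minimal sketch of that MOV-to-angle transition (assuming the Adafruit ServoKit library that drives the bonnet; the channel number and scaling are illustrative, not the project’s code):

from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)

def mov_to_angle(mov, max_mov=0.5):
    # Map a margin of victory (0 to max_mov) onto the servo's 0-180 degree range.
    mov = max(0.0, min(mov, max_mov))
    return (mov / max_mov) * 180

kit.servo[3].angle = mov_to_angle(0.30)   # e.g., a state with a 30% margin of victory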

The light-up portion of the project will be completed with 5050 NeoPixels, whose product specifications can be found here. A PDF of the specs will also be in the shared drive. The pin configuration can be found on page 5 of the PDF. When connecting the NeoPixel to the bonnets, the signal pin should connect to pin 2 (DIN), the power pin should connect to pin 3 (VDD), and the ground should connect to pin 1 (GND).

  • It should be possible to chain multiple Neopixels by chaining pin 4 (DOUT) to pin 2 (DIN) on another Neopixel, but I haven’t had a chance to test it.
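For reference, lighting a single pixel from the Pi looks roughly like this (a sketch assuming the CircuitPython neopixel library; the data pin and colors are placeholders, not the project’s actual wiring):

import board
import neopixel

pixels = neopixel.NeoPixel(board.D18, 1, brightness=0.3, auto_write=True)

pixels[0] = (0, 0, 255)     # blue for a Democratic win
# pixels[0] = (255, 0, 0)   # red for a Republican win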

Software

There is some Python code for powering the Servos simultaneously. This is on the Raspberry Pi and should be stored in the “Code” folder. The packages used for powering the Servos and Neopixels must be used in a virtual environment. On Thonny, you can go to “Run”, then “Select interpreter”, then choose the venv that is in the “Code” folder. Not much has been done for the software portion of this project.

February 2026 update

Further consideration about the project led us to expand the vision of the project. Instead of being tied to electoral data, the map will be able to use any two data points to compare states. The LEDs will not be exclusively red or blue, but RGB LEDs will be used to allow a range of colors.

As of February 2026, the hardware side of things has improved a bit. California was chosen for the test case as it is the largest with the most electoral votes. This would test the capability of the inexpensive 180 servo motors to move the 3D printed states.

Originally, we hoped to use data output from the R statistical program to generate the 3D models for printing. The resulting hexagons from this workflow were inconsistent.

Map showing inconsistent hexagon and height

An additional issue was that the generated file created a map with states at varying heights. We are going to account for this in the hardware by moving the states up and down, so they all need to be the same height.

Modeling each state by hand in Fusion 360 is the solution.

Model of California

Fusion 360 was also used to model the servo holder and a rack and pinion gear set to move the state vertically.

Model of the rack and pinion gear

To test the set up, two pieces of plywood were cut to house the gears and the state. Temporary supports were 3D printed to hold the two pieces of plywood together.

A test case using California can be seen in the video below.

Funding provided through a generous grant from UVA Arts Council.

Read Part 2 - Jump on the Gear Train

Light Up Your Heart

February 13, 2026, 21:30

A great Valentine project to brighten the day for your special someone.

Materials Needed

Step 1 Cut Out Cardboard Shape

Cut a heart shape out of cardboard. You can do any shape you want, though.

Cardboard heart

Step 2 Apply Copper tape

Make a “track” of copper tape around one side of the heart. The rails of the track should not touch. One is for the negative, the other for the positive part of the circuit. Leave a piece unstuck. This will be used for the “button”.

Cardboard heart

Step 3 Poke holes for LEDs

Poke holes in the heart for the LEDs. You can do as many or as few as you want. This one is labeled with - and + to make sure the LEDs are placed correctly.

Cardboard heart

Step 4 Insert LEDs

Push the LED legs through the holes. The short leg is negative ( - ) and the long leg is positive ( + ).

Cardboard heart Cardboard heart

Step 5 Secure LEDs to Copper Tape

Bend the legs of the LEDs so that the negative leg lays flat on the negative rail and the positive leg lays flat on the positive rail. Tape them down so they stay in contact with the rails.

Cardboard heart

Step 6 Make Battery Holder and “Button”

Use cardboard or cardstock to make a ring the size of the battery. The ring should be at least three times as tall as the battery.

Cardboard heart

Fold a piece of cardboard or cardstock to fit like a bridge over the battery ring. The positive ( + ) side of the copper tape will attach to the underside of the bridge to act as a button.

Cardboard heart

Step 7 Secure Battery and Button to Heart

Use masking tape to secure the battery ring and bridge (button) to the back of the heart. The button should rest on the negative ( - ) rail of copper tape. The positive ( + ) rail of tape should be attached to the bottom of the bridge.

Cardboard heart

Test the LEDs by pressing the button. They should all light up. If not, check to make sure all of the LED legs are touching the correct rail and are firmly taped down.

Step 8 Wind with Yarn

Finally, wind yarn around the heart until all of the cardboard is hidden. Secure the yarn on the back by tying a knot.

Cardboard heart

Press the button on the back to make the LEDs light up!

Cardboard heart

Building Folkloric Futures: A Behind-The-Scenes Look At Designing A Speculative Digital Archive Of Caribbean Storytelling

February 2, 2026, 13:00

What would it mean to design a digital archive that doesn’t simply preserve the past, but actively looks both backward and forward—one that helps us think differently about how the past and future speak to each other?

This question kept coming up as I began building Folkloric Futures, an Omeka-based digital archive that brings together Caribbean folklore, literature, visual art, and community storytelling. From the beginning, I knew I didn’t want to create an archive that treated folkloric figures as static artifacts or neatly categorized traditions. These stories are alive. They change depending on who tells them, where they’re told, and why they’re remembered.

Folkloric Futures started as a way to test that idea in practice.

I’m a PhD candidate in English at the University of Virginia, and my academic research focuses on Caribbean literature and speculative storytelling. My dissertation looks at how folkloric Caribbean figures like the soucouyant, douen, jumbie, and moko jumbie move across novels, oral traditions, and visual culture—and how those movements shape ideas about identity, belonging, and survival. But I kept asking myself a bigger question: what would it look like to take those ideas out of a dissertation and build something people could actually explore, use, and contribute to?

This project is my attempt to answer that.


Why folklore—and why speculation?

There’s a line from Martinican writer Patrick Chamoiseau that I return to often: “Their eyes had seen so much that they no longer distinguished between dream and reality” (from Creole Folktales). For me, that line captures what folklore does best. These stories don’t offer neat explanations of the past; they give people ways to live alongside it.

Caribbean folkloric figures are full of contradictions. The soucouyant is frightening, but also powerful—an older woman who sheds her skin and moves through the night as fire. The douen is a child spirit who lingers at the edge of the forest, both vulnerable and unsettling. The moko jumbie towers above the crowd during Carnival, part guardian, part performer. These figures carry history in ways that are felt rather than explained—through fear, humor, warning, and care.

Speculation gives me a way to work with that complexity. Instead of asking, “What is this figure?” speculation asks, “What can this figure do?” What stories does it carry? What futures does it hint at? What happens if we design an archive that allows those meanings to shift rather than settle?


Designing an archive that tells stories

At a basic level, Folkloric Futures is a digital archive built in Omeka. It brings together literary excerpts, oral histories, visual art, and creative reinterpretations of Caribbean folklore. But very quickly, I realized that the hardest part of the project wasn’t collecting materials—it was deciding how the archive should feel.

Most digital archives aim for visual consistency and neutrality. This one couldn’t. These figures don’t belong to a single mood, palette, or structure. So instead of designing one uniform template, I let each figure shape its own page.

The soucouyant’s page uses deep violets and ember-like gradients to suggest heat, night, and movement. The douen’s page is quieter, built around forest greens and shadow, echoing stories of childhood loss and ecological liminality. The moko jumbie’s page is bright and vertical, drawing on the energy of Carnival and the figure’s role as a watcher from above.

These choices aren’t just aesthetic. They’re a way of letting design do some of the storytelling work—guiding how people encounter each figure before they ever read a word.


Metadata, but make it human

Metadata turned out to be one of the most surprising parts of this project.

Folkloric figures don’t stay in one place. Their names change across islands, and their meanings shift between generations. Trying to force them into fixed categories felt like losing what mattered most about them. So instead of treating metadata as a rigid checklist, I started treating it as something closer to a set of prompts.

Instead of asking only where a figure comes from, the archive makes room for multiple origins. It doesn’t settle on a single interpretation, but allows overlapping themes and contradictions to sit side by side. In this way, metadata becomes another storytelling layer—one that reflects how folklore actually moves through the world.


Ethics as something you build in, not add later

Because Folkloric Futures works with living traditions, ethics could not be an afterthought. Early on, I wrote a project charter to guide decisions about collaboration, credit, and care.

That meant committing to a few core ideas:

  • contributors are collaborators, not data sources
  • artists deserve material compensation
  • people should control how their stories are shared
  • accessibility matters, especially for oral and low-bandwidth storytelling

These principles shape everything from submission forms to design choices. Ethics, in this project, isn’t a separate section—it’s part of the infrastructure.


Teaching as part of the archive

Folkloric Futures is also a teaching project. In Spring 2026, I’ll be running a workshop series that invites participants to experiment with archives, oral storytelling, mapping, and creative worldmaking—using both digital tools and low-tech alternatives.

The goal isn’t technical mastery—every workshop includes a low-tech version by design. It’s about helping people see archives as places where stories can be questioned and expanded. In this way, teaching feeds back into the archive, and the archive becomes something shaped by ongoing learning rather than a finished authority.


What building in public has taught me

Working on this project has changed how I think about archives. I’ve learned that design decisions are never neutral, that aesthetics shape interpretation, and that building in public requires patience, iteration, and accountability.

Most of all, it’s reinforced my belief that speculation isn’t escapism. In Caribbean storytelling, imagination is often a survival strategy—a way of carrying history forward without being trapped by it. Folkloric Futures tries to honor that by creating a digital space that doesn’t just preserve stories, but invites people to think with them.

Because an archive, at its best, doesn’t just hold the past. It helps people live with it.

The Slideshow And The Video Essay

By leo-palma
January 16, 2026, 13:00

In my discipline, art history, the slide show is not only an intrinsic part of teaching, but it shaped the discipline’s methods from its inception. Practices through which art historians are taught to understand art – like visual analysis and comparative analysis – rely on one or more reproductions of artworks being available to students in the classroom. Photographic slides have been used in art history since the early twentieth century. Then why are most art history slideshows so plain? Why, now that technology has advanced, have the conventions of the art history slideshow stayed largely the same?

Outside of academia, video is one of the main ways people consume information. Currently, this is largely through online video. Online educational programs vary greatly in quality and accuracy, but many educated individuals who operate outside of academia have taken to platforms like YouTube to share their knowledge and their analysis on a variety of topics. Some of my personal favourite video essayists hold undergraduate and graduate degrees in philosophy. They often start their videos by addressing a current topical issue or event as a point of departure to present different philosophers’ ideas and concepts. While these videos use some academic practices – like citing sources on the top right of the video, or providing a bibliography in the description – their presentation style is definitely not academic. Their videos contain elaborate costumes, makeup and sets; their presentation style is highly emotive: they use humor, plot twists, and personal experiences to make complex topics more approachable. On YouTube, maintaining viewers’ engagement and retention is paramount for the monetisation of a channel. This is often achieved by favouring material that emotionally, rather than just intellectually, engages the viewer. Video essays can be long, sometimes multiple hours. The audience is almost by definition assumed to be a distracted one. Viewers are doing chores, or cooking, or on their commute. To keep them interested and listening, one needs to find ways to not only make the content relevant to them, but to create emotional resonance and construct exciting visuals and sound design to highlight important moments. This highly emotional way of presenting differs from academic rigor, expectations of objectivity, and separation between the self and one’s field of study.

While in certain academic fields – largely feminist, queer, black and other minority studies – this requirement has been challenged, it is still an underlying practice in disciplines like art history. Yet, we all spend years of our life studying and researching this material because we love it, because we find it interesting and relevant. So why is it so difficult to communicate passion and enthusiasm in public facing presentations?

I believe there are at least two practices from the YouTube video essay that could translate to the academic presentation: encouraging emotional engagement, and providing variety. In regard to emotion, for example: Where can I script a joke or acknowledge the humor of an aspect of my research? Where can I leave out a conclusion or some information, to surprise my listener with it later? Where can I reenact a moment of my research when I found out something exciting or unexpected? Where can I peel back the curtain a bit on the process of my research, so that the listener feels involved in the narrative I’m presenting, and not just a passive bystander? Importantly, how can I use the tools at my disposal to create these moments of emotional engagement? Where can I hide something on the next slide, as to not give away my findings before I get to them? Where can I include a photo that brings up a good anecdote from my time in the archive, or at the collection?

Variety and dynamism can also be accomplished through the slides. Why am I the only one talking? Do I have audio or video clips that I can use? Can I call onto someone in the audience to answer a question? I often find it difficult to not sound monotone when I’m presenting. Embedding audio or video or planning for moments of interaction with the audience could help to break up the sound of my voice. There are also ways to make slides more dynamic and visually interesting without making them look unprofessional. If I need to talk over an image for a long time, how can I animate it in some way? Can I zoom in onto details of the painting as they come up in the talk? Can I show Calder’s mobile sculpture move? Can I play a video of the performance with the sound off as I’m talking about it?

The workshop I’m planning in Spring is an attempt to think about emotional engagement and diversity in the slideshow presentation as a group, without being prescriptive about which methods to adopt and when. I am aware that our cohort works in a variety of fields and from multiple identitarian positions that affect how we are perceived in a professional setting. Some of us can take more liberties when it comes to academic speaking and some of us cannot. But I am interested in finding out what people think about this comparison. Which methods of the video essay can apply to the academic presentation with the slideshow? I plan to use clips from video essays to show some of these techniques. I’ll then play a clip of an academic presentation, and ask the people in the room to draw a storyboard of it. Storyboards, where a drawing of a shot and notes are put side by side, are very similar to the slide and script style of preparing for a talk. However, they force one into the position of the audience member instead of the presenter. They allow us to think in more detail about framing, and movement, and how to direct the attention of the viewer. I hope this exercise will inspire the group to think about some ways in which the slideshow can improve our ability to communicate our research to a variety of audiences.

Vector/Vectoria, be yourself at these GIS Workshops

January 15, 2026, 13:00

Get over to Chez Scholars’ Lab for the hottest GIS workshops in town. And fear not, our references may be from the nineteen hundreds, but much like the themes of that movie, the content of these workshops is ahead of its time.

Spring semester is when we shift gears and turn our workshop focus to ArcGIS Online (AGOL), Esri’s GIS solution for the cloud. AGOL is browser-based, eliminating any Windows vs. Mac shenanigans, and allowing us to provide temporary access to members of the community that don’t have UVA credentials. Not sure what the difference is between ArcGIS Pro and ArcGIS Online? Mark Patterson sums it up well here. Still not sure? As always, feel free to contact us with any questions.

  • Sessions are one hour and assume participants have no previous experience using GIS. These will be hands-on demonstrations with step-by-step tutorials.
  • We will meet in-person in the Scholars’ Lab (Shannon Library 308) on Wednesdays from 2PM to 3PM, and openly welcome the UVA and larger Charlottesville community.
  • Walk-ins are welcome, but due to limited seating, we strongly encourage registering using the links below or at our Events page. If you’re waitlisted, please contact us at uvagis@virginia.edu.
  • Class materials will be made available on the Spring 2026 Workshops tab of our Teaching Resources page.
  • We will not be offering a virtual option this semester. We apologize for any inconvenience.
  • Please note, these workshops are not intended for course instruction. If you’re here at the direction of your professor, or if you’re teaching a class and would like to include GIS instruction, please contact us at uvagis@virginia.edu.

January 28th - Introduction to ArcGIS Online

ArcGIS Online is the cloud-based younger sibling of ArcGIS Pro. It can’t do some of the less flashy, GISy kind of things, but it’s in the cloud, it’s connected, which adds all the hip functionality we’ve come to expect. With ArcGIS Online, you can find and create spatial data, maps, and applications. Access a limited but powerful set of analysis tools that take advantage of cloud computing and pre-configured data and resources. Share and collaborate with small groups or with the world. It’s an easy-to-use entry into the world of GIS, all from the comfort of your browser.

Register Here!

February 4th - Find and Create Spatial Data

Start your data search with AGOL’s collection of geographic information from around the globe. Not finding the data you seek? We’ll cover how to create your own data, and how to share it with the world.

Register Here!

February 11th - Collect Data in the Field

Whether you are crowdsourcing spatial data or performing survey work, having an application that records location and uploads data directly to a mapping application is incredibly useful.

Register Here!

February 18th - Web Mapping and Visualization

Pop-ups, filters, clustering, advanced symbology. There are many ways to personalize your maps, enhancing the story your data tells. We’ll dive into some of the more advanced functionality that allows you to fine-tune your Web Maps. Don’t be put off by “advanced”, though; this session is beginner friendly.

Register Here!

February 25th - Spatial Analysis with ArcGIS Online

Perform basic analysis with tools like Buffer and Spatial Join. Or, enhance your data, taking advantage of the always up-to-date elevation, streets, and demographics data available in ArcGIS Online with tools like Create Viewshed, Find Nearest, and Enrich. Come for the learning and stay for stories about the old days when we had to create all that data ourselves. Uphill. Both ways!!

Register Here!

March 4th - Spring Break, No Workshop!

Enjoy a break. We’ll see you next week!

March 11th - Instant Apps and More

Dip your toes into the world of web GIS applications with AGOL’s quick-configure app builders. We’ll explore a few of the many options for enriching your map and data with focused applications. From time animation to interactive multimedia, these easy-to-use templates and builders take your data to the next level.

Register Here!

March 18th - Introduction to ArcGIS StoryMaps

StoryMaps is a website builder that makes it easy to add narrative and multimedia context to your ArcGIS Online maps. Whether telling a story, giving a tour, or comparing historic maps, StoryMaps is an easy-to-use tool that allows you to create a polished web presentation.

Register Here!

Preparing for Leave

January 14, 2026, 13:00

My wife and I are expecting our second child in just a few weeks, which means that I am gearing up for a new and chaotic phase of life. As a part of the preparation, I’m doing everything I can to keep things running smoothly for student programs in the Scholars’ Lab while I’m out. I set up a process for doing so when I took leave two years ago for our first child, so I’m not exactly working from scratch. Here’s how I’m preparing for my leave this time around to make things easier for my coworkers who will be keeping things going in my absence.

Give notice early

Everyone has different interlocking reasons for when they give notice to their team. Those reasons might be medical, personal, or professional. Given my own particular circumstances, I let my immediate collaborators in the Scholars’ Lab know fairly early, several months before I would be out. With this knowledge well in advance of the due date, my collaborators knew that I was taking steps to accommodate my absence. I also notified students who would be impacted. The dates I chose to take these steps were selected carefully in conversation with my supervisor, who helped me decide who needed to know and when.

Identify areas of responsibility

One of the first tasks in preparing to unplug for two months was to list my tasks, differentiating between major ongoing initiatives and smaller one-off items. This process helped me create a to-do list so that I could make progress in a controlled manner ahead of my leave. Otherwise, one can get lost in an anxiety spiral, feeling like there is always more to do. I identified the Praxis Program, the DH Fellowship Committee, and our summer programs as primary initiatives in need of continuity.

Wrap up what I can

For smaller projects, I sprinted over the past two months to finalize whatever I could. Rather than working with a particular student on a weekly basis, for example, I set a date for a multi-hour meeting where we could make significant progress on their project. I set early writing goals for myself to meet deadlines in advance. And I took advantage of the slow down between semesters as space in which I could get ahead.

Establish points of contact for what I can’t

Some projects and initiatives will inevitably roll over through my leave. Working through my list, I worked with my supervisor and coworkers to identify people who might be willing to take on specific pieces of my work. This process always involved asking my collaborators a series of questions: what do they need to feel comfortable? What can they do? What do they feel uncomfortable with? Who else might make sense for particular tasks?

Document everything

So much of the work I do exists in my head. Workflows, points of contact, procedures, norms. I tried to write as much of this down as possible so that someone stepping in would know exactly what to do and when. Winnie E. Pérez Martínez has been exceptional at working on this with me as a student worker, especially in regard to clarity and formatting. Winnie has a special talent for taking an enormous brain dump from me and assembling it into a coherent, less intimidating guide. I have learned a lot from her!

Put guardrails on future commitments

If possible, I tried to stop planning major commitments that would take place a couple weeks before the due date. At the very least, when I agreed to something, I made it clear that I might unexpectedly withdraw with little notice. I am also giving a couple of weeks’ buffer before scheduling new commitments after I return in April. After all, babies have their own schedules in mind, and postpartum life is enormously challenging and complex. It’s impossible to know what our lives will be like for the next several months, and I tried to be honest about these facts with everyone involved.

Caveats

Everyone deserves the time and energy that parental leave allows to refocus on their personal life and meet the needs of a difficult transition. Everyone deserves coworkers kind enough to help them make space for their family. But I also know this is not the norm. I am enormously fortunate and privileged to have such support. That being said, I hope that what I’ve outlined above can be helpful even for those who do not possess such a robust support system. In those cases, this post might offer a rough guide for how to advocate, push back, and find small space for what you need in infrastructure that might not otherwise allow it.

3D Printed Cityscape

January 8, 2026, 13:00

A screenshot of a map

This is a guest post by Makerspace user Yifan Liu. During the Fall 2025 semester, he developed and created a number of amazing cityscapes.

3D Print Tutorial: Cityscapes

By: Yifan Liu (yl3gm), UVA Graduate Medical Student

Creating a 3D Model

  1. Open the online software Map2Model. You will be presented with the following interface. A screenshot of a map
  2. Enter the city or area you would like to model into the search bar. Then select an area of the map to be modeled.
    A screenshot of a map
  3. Adjusting settings: There are many customizable settings that can be adjusted in the right-hand menus. Below are a few example settings that I commonly choose.
    • Base:
      • Map size: 152mm
      • Base layer: 4mm
      • Topography: Disable if modeling a relatively flat area to reduce complexity
      • Frame: Off
    • Features:
      • Roads
      • Include footpaths: Enable if you want to include detail of hiking trails or parks, for example. Disable if you want to reduce file size and processing time
      • Road Types: Play around with disabling different road types for effect or to reduce complexity
    • Grass: Off
    • Buildings:
      • Buildings Scale: 1.2x – 1.5x
    • Sand: Off
    • Piers: Off
  4. Press “Generate Mesh” to generate a 3D model of your selection. A screenshot of a map
  5. Click the dropdown menu next to “Export 3MF”. Click “Export STL”
    • Note: you can also export as 3MF to retain features like roads, water, and buildings as separate objects.

Editing and Refining (optional)

You may notice that some structures in the model are not correctly detailed or rendered. If you want to add more detail, import your model into a 3d modelling software. For this example, I used Blender. More detailed instructions on how to use Blender can be found online.

  1. Download or create 3D models of the desired buildings. Adjust them to the correct scale and position, and place them over the existing buildings on the model.
    A screenshot of a map
  2. Delete undesired geometry or vertices on the cityscape.
  3. Export the file as an STL for slicing and printing.
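
If you prefer scripting to clicking, the same adjustments can also be made from Blender’s Python console. The snippet below is only a rough sketch, assuming Blender 3.x with the bundled STL import/export add-on enabled; the file paths, scale, and position values are placeholders rather than part of the original workflow.

import bpy

# Import the generated cityscape STL (path is a placeholder)
bpy.ops.import_mesh.stl(filepath="/path/to/cityscape.stl")
city = bpy.context.selected_objects[0]

# Import a more detailed replacement building, then scale and position it
bpy.ops.import_mesh.stl(filepath="/path/to/building.stl")
building = bpy.context.selected_objects[0]
building.scale = (1.3, 1.3, 1.3)
building.location = (10.0, 5.0, 0.0)

# Select both objects and export the combined model as a new STL
city.select_set(True)
building.select_set(True)
bpy.ops.export_mesh.stl(filepath="/path/to/cityscape_edited.stl", use_selection=True)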

Slicing

  1. Open STL file in PrusaSlicer
  2. Adjust settings by going to “Print Settings”:
    • Print settings: 0.20mm Structural
    • Brim: 4mm
    • Infill: 10%
  3. Multimaterial Printing: (optional)
    • Click on the STL object and click on Multimaterial printing on the left-hand menu icons
    • Use the Smart fill tool to paint the desired colors. I prefer to paint water features blue and all other features white.

A screenshot of a map

Printing

  1. After slicing, export your file to the desired printer. If using Multimaterial printing, use either Kermit (Prusa MK4 MMU3) or Big Bird (Prusa XL).
  2. Load and select desired filaments on printer. Make sure to check that the correct filament is paired with the correct extruder.
  3. Print and wait!

A screenshot of a map

Seeing, Describing, and Imagining: Human and Machine Vision in the Humanities

2026年1月3日 13:00

Framing the Workshop: Vision, Interpretation, and Context

In recent years, digital tools have quietly transformed how we experience and interpret images in museums, classrooms, and research settings. As an art historian working at the intersection of art history, digital media, and visual culture, I am drawn to examining how people translate visual experience into words, and how that process compares with machine analysis of the same images. I am especially interested in creating spaces that invite us to pause, pay closer attention, and make the act of interpretation visible, rather than treating images or technologies as self-evident.

Seeing, Describing, and Imagining originated from a simple, low-stakes classroom exercise I first encountered while serving as a teaching assistant in a course on formal and visual analysis taught by my advisor. Watching students work through the challenge of turning what they were seeing into words made it clear how tentative and negotiable description can be. That experience stayed with me and prompted me to rethink the exercise beyond the classroom, adapting it into a workshop format.

The workshop aims to create a shared, practice-based method for visual analysis that can be applied in various settings, from visual analysis courses to digital humanities labs, while staying rooted in art-historical approaches to looking.

From Looking to Language: Description and Interpretation

The workshop is conceived as a hands-on, collaborative way of exploring how images move between seeing, describing, and imagining. It is designed to begin with a simple exercise. Participants would look closely at an artwork and translate what they see into words. Working in pairs, one person would study the artwork and describe it in detail, while the other would create a quick line sketch using only that description, without ever seeing the image itself.

This phase aims to slow the process in a constructive way. Participants are encouraged to reflect on the act of describing itself: What do you choose to mention first, and why? Which parts of the artwork are hardest to put into words? These questions are designed to show that description is never neutral. Emphasis, order, and omission all influence how an image is understood.

When sketches and original artworks are placed side by side, the workshop is designed to shift from creating to comparing. Instead of viewing differences as mistakes, participants are encouraged to explore what moments of similarity and difference may reveal about the connection between image and text. The aim is not to fix these gaps but to use them as a way to think about how seeing, knowing, and describing are linked in art history practice.

Human–Machine Translation: AI, Images, and Visual Convention

Starting from this analog foundation, the workshop is structured to move into a digital phase by introducing AI text-to-image systems. Participants would revisit and refine their descriptions before entering them into an AI model such as DALL·E or Adobe Firefly. The resulting AI-generated image would then be placed alongside the original artwork and the participant-created sketch as a third object for comparison.
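
In practice, participants would most likely use the web interfaces of these tools, but the same step could be scripted for a lab setting. The following is only a minimal sketch, assuming OpenAI’s Python SDK and Images API; the prompt text is a placeholder standing in for a participant’s refined description.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A participant's refined description of the artwork becomes the prompt
prompt = "A seated figure in a dim interior, lit from a tall window on the left..."

result = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1024x1024",
    n=1,
)

# URL of the generated image, to be compared with the original artwork and the sketch
print(result.data[0].url)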

Rather than evaluating which image is better or more accurate, this stage emphasizes observation. Participants are encouraged to ask what kinds of visual patterns might emerge across AI-generated images. Which elements seem emphasized, simplified, or made more uniform across different outputs? Looking across multiple results is meant to create space for noticing patterns without assuming in advance what those patterns will be.

Existing scholarship by authors such as Kate Crawford, Safiya Umoja Noble, Ruha Benjamin, and Johanna Drucker suggests that AI systems are shaped by the datasets they are trained on, the ways information is classified, and the cultural assumptions embedded in those systems. Drawing on these works, the workshop is designed to create conditions where such influences could become visible through hands-on engagement rather than explanation. As participants compare images, the process opens up the possibility of exploring whether familiar visual conventions emerge, particularly when prompts involve artworks or visual traditions that are not widely represented in large image datasets. What becomes noticeable is deliberately left open and expected to take shape through comparison rather than as a predetermined outcome.

The workshop also introduces a reverse process, moving from image to text. Participants would upload an artwork into an AI vision tool and examine how the system translates the image into language. Reading these AI-generated descriptions alongside participants’ own interpretive accounts is intended to prompt reflection on differences in tone, emphasis, and confidence, and to raise questions about how uncertainty functions in human versus machine descriptions.

Staying with the Process: Open-Ended Inquiry and Reflection

Taken together, Seeing, Describing, and Imagining is framed as an open-ended inquiry rather than a demonstration. Prompt writing and refinement are approached not as purely technical tasks but as interpretive acts, similar to the analytical frameworks art historians use when working with images. While elements of the workshop align with existing practices in art history education, digital humanities, and critical AI studies, Seeing, Describing, and Imagining brings these approaches together in a distinctive sequence that foregrounds interpretation as an active, negotiated process involving both human and machine systems of vision.

The workshop is designed to foster attentiveness, curiosity, and careful comparison. It encourages participants to stay with the process and to observe what may emerge as images move between eyes, words, algorithms, and back again. In this way, both human and machine vision are presented not as stable endpoints, but as ongoing, context-dependent practices shaped by history, culture, and interpretation.

Works Cited

  • Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge, UK: Polity Press, 2019.
  • Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press, 2021.
  • Drucker, Johanna. Graphesis: Visual Forms of Knowledge Production. Cambridge, MA: Harvard University Press, 2014.
  • Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 2018.

Brighter Social Media Skies: Bluesky For Library-Worker (and DH!) Online Community

2025年12月14日 13:00

Social media can help you build professional and social community, find jobs, learn from others, share your work, ask questions, and hear about new ideas and projects. After the implosion of multiple other social platforms, the Bluesky platform has become one of the best options for keeping access to those benefits. This video captures a live webinar I gave in May for the Metropolitan New York Library Council, aimed at helping library and archives workers who are considering trying out Bluesky, or who’ve dipped a toe in but haven’t felt comfortable using it yet.

All the resources mentioned in this talk are listed at tinyurl.com/intro-bluesky. Most useful is my Bluesky for Academics guide at tinyurl.com/DHBluesky, which remains regularly updated and contains both a very quick cheatsheet and an incredibly detailed walkthrough of how to get started with Bluesky for DHers, GLAM folks, and other knowledge workers. At the end of that guide is a sortable list of “starter packs”, feeds, and lists gathering folks to follow on Bluesky around topics like DH, critical tech, expansive making & crafting, queer studies, social justice work, and more.

GIS Mapping Taught Through the Theory of Accompaniment

2025年12月8日 13:00

Geographic Information System (GIS) mapping attaches a dataset to a specific space and place, substantiating a relationship between the two as not only directly related but as affected by or moved to that specific point on a map. However, when thinking about how to teach a workshop on mapping to a group, one problem came to mind: we are in a generation with a profound lack of relationship to and with maps and the locations of countries. Which, in general, is its own point of discussion; however, when considering migration and mapping, a recognition of this lack became a focus for me. The question formed: how do I first get people not only to see, but really understand this non-relationship?

As students, we shape our own archives, perceptions, and pedagogy through the scholars we read and encounter. The scholar whose work inspired this very workshop, and answered the questions I wrestled with, is Ana Patricia Rodriguez. I was guided through my approach by both her first monograph, Dividing the Isthmus: Central American Transnational Histories, Literature, and Cultures, and her article, “The Art of (Un)Accompaniment: Salvadoran Child Refugee Narratives in the Twenty-first Century.” (10 out of 10 recommend others read both)

First The Non-Relationship

Rodriguez begins her monograph’s introduction with an activity she runs in her classroom. I pull that activity and use it as my own introduction not to mapping, but to maps. The assumption I make is clear: Latin American countries do not and will not register as located within the group’s imagination. The lack is made evident. Now, no spoilers, go read her book. This part of the workshop will use 3D-printed or woodcut materials, is theoretically brief, and allows me to transition from map to mapping by asking them questions. I don’t know what I will ask quite yet, but they will be fantastic questions.

Accompaniment as Pedagogy

Next point of inspiration. First, the question. How can I make a GIS mapping workshop interactive and include a dataset based on migrant experiences in Mexico? Rodriguez’s article introduced me to the work on accompaniment. In this article, her reading of Javier Zamora’s Unaccompanied (also a book everyone should read) theorizes “a poetics of un/accompaniment” where,

The poems create a path of accompaniment of critical empathy for readers to follow literally and literarily the migratory routes of child migrants … It is in this process of accompaniment that readers are positioned, if not prodded, to question the conditions that produce child migration and the legal violence of migration policies, which shape the outcomes of arrival, detention, exclusion, and deportation, especially for women and children.

The accompaniment that Rodriguez traces in Zamora’s work builds on existing scholarship on accompaniment in movements and research, but ties it to migration. Poetry and narratives create a different space for “readers to follow” migrants on their route to the United States. This, along with the ways readers are “positioned, if not prodded, to question the conditions,” prompted me to consider how a hands-on GIS workshop almost inherently, and unintentionally, seeks to enact an accompaniment. This is not to claim that there is a perfect or unflawed relationship between mapping and accompaniment. The accompaniment will shift a bit in its movement to the digital and/or in the making of narratives into data points. However, through accompaniment, what became clear was that what I considered to be simply an inherent relationship between place and data was flawed when I maintained it as inherent rather than as something to be questioned and interrogated.

The reality is that datasets can risk reducing humans to bodies in the very act of transforming information into points plotted on a map. That risk is exacerbated when the lack of relationship to a map is already present, and all a viewer intakes is a map filled with marks, even when they attempt to filter and narrow the scope of what they are looking at. With that, can embedding the mapping of points as a process of accompaniment shift how a viewer or a mapper processes a large, complex dataset? And is this shift my pedagogical framework? No clue, I will get back to you on that one.

The Actual Workshop

The nitty-gritty part of this actual blog post. Bear with me. In groups, people will be given a 3D-printed or woodcut map of México with holes already embedded in the country. These will be the data points (holes, literally just holes already made in the map) and pins sized to fit them. The holes are rendered as a permanent facet of the map due to the nature of 3D printing, which makes me consider how the stories and narratives the map represents are always present, whether they are pinned and mapped or not. This should by no means be left uncomplicated: we should always consider why data gets mapped, what it is meant to demonstrate, what ends up entering, and what is left out and excluded.

Along with the country, they will also be given a mix of 14 notecards; on the front, each will have a year, the migrant’s nationality, and gender. In a longer workshop, I would leave parts of the dataset unlabeled and have participants read the narrative on the other side and fill in the data themselves, making data collection part of the activity and including a brief interrogation of what we synthesize and ultimately prioritize.
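
Purely as an illustration of where this could go digitally, the notecards might eventually be encoded as a small dataset and plotted with a tool like folium. The rows, coordinates, and column names below are invented for the sake of the sketch and are not the workshop’s actual data.

import folium
import pandas as pd

# Each row stands in for one notecard: year, nationality, gender,
# and the final location in Mexico mentioned in the narrative
cards = pd.DataFrame([
    {"year": 2014, "nationality": "Honduran", "gender": "F", "place": "Tapachula", "lat": 14.91, "lon": -92.26},
    {"year": 2018, "nationality": "Salvadoran", "gender": "M", "place": "Mexico City", "lat": 19.43, "lon": -99.13},
    {"year": 2021, "nationality": "Guatemalan", "gender": "F", "place": "Tijuana", "lat": 32.51, "lon": -117.04},
])

# Filtering is where the labels on the front of each card start to matter
subset = cards[cards["year"] >= 2018]

# Plot the filtered points on a map centered on Mexico
m = folium.Map(location=[23.6, -102.5], zoom_start=5)
for _, row in subset.iterrows():
    folium.Marker([row["lat"], row["lon"]], popup=f'{row["place"]} ({row["year"]})').add_to(m)
m.save("accompaniment_map.html")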

Mexico STL file

Closeup of Mexico STL

STL file for pins

Slowly but surely, they will place a pin on the 3D map at the final location in Mexico mentioned in the narrative, where the hole already exists. By this point, the idea is that each pin they place on the map will serve as an act of accompaniment.

After they finish plotting all the index cards, the hope is that the participant will also be struck by the magnitude and scatter of a map filled with data points everywhere. It is here that the final questions address an essential part of GIS mapping: how does one filter through large datasets? How important were those labels at the front of the card to begin with? How do all the parts work together? Does this data filtering return us to a different directionality of accompaniment? These questions, along with this workshop, are truly a work in progress. While the process of prototyping countries and pins has taught me so many things (like patience and a love of failure), there is still so much I cannot yet estimate. And any comments or suggestions are always welcomed with gratitude.

Finally, I have a big rule about recognizing the role people play in helping me make a chaotic idea from my imagination feel and become tangible. None of this would have been possible without the Makerspace, Ammon, Shane, Brandon, and, lastly, David Coyoca, the man I bother with all my questions about teaching, and who helped me sort through the chaos that is my brainstorming. This final version-in-process would not have been possible without the team effort that praxis encourages. 10 out of 10. Thank you.

References

Rodriguez, Ana Patricia. 2009. Dividing the Isthmus: Central American Transnational Histories, Literature, and Cultures. Austin: University of Texas Press.

————. 2025. “The Art of (Un)Accompaniment: Salvadoran Child Refugee Narratives in the Twenty-first Century.” Studies in 20th & 21st Century Literature, Vol. 49, Iss. 1, Article 8. https://doi.org/10.4148/2334-4415.2281

The OpenAI API documentation is very bad

By shane-lin
2025年12月8日 13:00

The OpenAI API docs are very bad. In my experience as a coder, I’ve come across my share of bad documentation. Typically, this is because the documentation is poorly organized, too spare, or missing coverage. Or it’s because the design of the API itself is badly conceived, inconsistent, or contains the accumulated cruft of years (or decades!) of bloat and abandoned features.

But I can’t recall ever seeing documentation that contains code samples that are both factually wrong and syntactically invalid. It’s bad enough that it comes across as documentation written by GPT, and not even a recent model.

Take this example, part of an entry under the “Core Concepts” section:

context = [
    { "role": "role", "content": "What is the capital of France?" }
]
res1 = client.responses.create(
    model="gpt-5",
    input=context,
)

// Append the first responses output to context
context += res1.output

// Add the next user message
context += [
    { "role": "role", "content": "And it's population?" }
]

res2 = client.responses.create(
    model="gpt-5",
    input=context,
)

The Python code sample here is not syntactically correct. The comments use the ‘//’ convention of C/Java/Javascript in-line comments, rather than Python’s ‘#’. Additionally, OpenAI has the concept of a role, which indicates who (e.g. the system, the user, or the assistant) is “speaking.” The string “role” is not a valid value for this, and making an API call with it results in an error:

openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value: 'role'. Supported values are: 'assistant', 'system', 'developer', and 'user'.", 'type': 'invalid_request_error', 'param': 'input[3]', 'code': 'invalid_value'}}

So, there are a total of 7 code statements in this sample, including the comments, and 4 of them have errors. The thing is, GPT-5 is actually pretty good at writing code. It’s even capable of executing Python code in an internal environment. We can see this facility in action by simply asking ChatGPT to debug the code from the OpenAI documentation.

ChatGPT response indicating the two errors from the OpenAI API documentation

This is a mode of LLM use that I haven’t had a lot of luck with, but here it pinpoints the two errors perfectly.
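
For reference, here is the same snippet with those two fixes applied: Python-style ‘#’ comments and a valid role value (“user”). I’ve also added the import and client setup that the docs assume, so the sketch is self-contained; everything else is unchanged.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

context = [
    { "role": "user", "content": "What is the capital of France?" }
]
res1 = client.responses.create(
    model="gpt-5",
    input=context,
)

# Append the first response's output to the context
context += res1.output

# Add the next user message
context += [
    { "role": "user", "content": "And its population?" }
]

res2 = client.responses.create(
    model="gpt-5",
    input=context,
)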

When documentation is bad in the common ways, it typically just creates a frustrating programming experience. And, to be clear, the OpenAI docs are bad in some of those ways too. But the sheer lack of care this demonstrates is both shocking, given all the ways that Tech has integrated AI into our world, and, frankly, majestic. Like making a horse consul or completely blowing up the system of global trade.
