analysts for correction. Accepted records were sent 
to a complex computer batch edit process. Each 
execution of the computer edit in batch mode 
consisted of records from only one State and was 
run as the data were received from the NPC, the NASS 
Electronic Data Reporting (EDR) web utility, or the 
Computer-Assisted Telephone Interview (CATI) 
applications. 
The computer edit determined whether a reporting 
operation met the qualifying criteria to be counted as 
a farm (in-scope). The edit examined each in-scope 
record for reasonableness and completeness and 
determined whether to accept the recorded value for 
each data item or to take corrective action. Such 
corrective actions included removing erroneously 
reported values, replacing an unreasonable value 
with one consistent with other reported data, or 
providing a value for an overlooked item. To the 
extent possible, the computer edit determined a 
replacement value. Strategies for determining 
replacement values are discussed in the next section. 
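The accept-or-correct decision can be sketched as a simple rule check. This is a minimal illustration only; the item names, acceptable bounds, and derivation logic below are assumptions for the example, not NASS's actual edit parameters.

```python
# Sketch of a deterministic edit check: accept a reported value, or take
# corrective action (fill an overlooked item, replace an unreasonable
# value, or refer the record to an analyst). Item names and bounds are
# illustrative assumptions only.

def edit_item(record, item, low, high, derive):
    """Return a (action, value) pair for one data item."""
    value = record.get(item)
    if value is None:
        # Overlooked item: try to provide a value from other reported data.
        derived = derive(record)
        return ("fill", derived) if derived is not None else ("refer", None)
    if low <= value <= high:
        return ("accept", value)      # reported value is reasonable
    derived = derive(record)
    if derived is not None:
        return ("replace", derived)   # replace the unreasonable value
    return ("refer", None)            # send to an analyst

# Example: a yield item must be consistent with reported acres and production.
record = {"corn_acres": 100.0, "corn_bushels": 15000.0, "corn_yield": 9999.0}
action, value = edit_item(
    record, "corn_yield", low=0.0, high=400.0,
    derive=lambda r: r["corn_bushels"] / r["corn_acres"] if r.get("corn_acres") else None,
)
# The reported yield is outside the acceptable range, so the edit replaces
# it with the value implied by production / acres.
```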
Operations failing to meet the qualifying criteria 
were categorized as out-of-scope for the census; that 
is, they were classified as nonfarms. Out-of-scope 
records that NASS had reason to believe might be 
in-scope (for example, those with indications of recent 
or significant agricultural activity reported on NASS 
surveys) were referred to analysts for verification. 
The edit systematically checked reported data 
section-by-section with the overall objective of 
achieving an internally consistent and complete 
report. NASS subject-matter experts had previously 
defined the criteria for acceptable data. Problems 
that could not be resolved within the edit were 
referred to an analyst for intervention. Prior to the 
census mailout, NASS established a group of 90 
analysts in a Census Editing Unit in the National 
Operations Center in St. Louis, MO, who examined 
the scanned images, consulted additional sources of 
information, and determined an appropriate action. 
Field office analysts also participated using an 
interactive version of the edit program to submit 
corrected data and immediately re-edit the record to 
ensure a satisfactory solution. 
Imputing Data 
The edit determined the best value to impute for 
reported responses that were deemed unreasonable 
and for required responses that were absent. If an 
item could not be calculated directly from other 
current responses, the edit determined whether 
acreage, production, or inventory items had been 
reported for that farm on a recent NASS crop or 
livestock survey. For operations whose operator had 
not changed in the past five years, demographic 
variables such as race and sex were taken from the 
previous census. 
Administrative data from the Farm Service Agency 
were used for a few items, such as Conservation 
Reserve Program acreage. When deterministic edit 
logic and previously reported data sources proved 
inadequate, data from a reporting farm of similar 
type, size, and location (a donor farm) were 
considered. In cases where automated imputation 
was unable to provide a consistent report, the record 
was referred to an analyst for resolution. 
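The fallback order described above can be sketched as a cascade of imputation sources. The function and field names here are illustrative assumptions; the actual NASS edit logic was far more detailed.

```python
# Sketch of the imputation fallback order: direct calculation, prior
# survey data, previous-census demographics, administrative data, then
# donor imputation, with analyst referral as the last resort.
# All names below are illustrative, not NASS's actual identifiers.

DEMOGRAPHIC_ITEMS = {"race", "sex"}

def calculate_from_current(item, record):
    # Illustrative direct calculation from other current responses.
    if item == "total_cropland" and "harvested" in record and "failed" in record:
        return record["harvested"] + record["failed"]
    return None

def impute(item, record, prior_survey, prev_census, admin_data, find_donor):
    # 1. Calculate directly from other current responses, if possible.
    calc = calculate_from_current(item, record)
    if calc is not None:
        return calc, "calculated"
    # 2. Use the same item from a recent NASS crop or livestock survey.
    if item in prior_survey:
        return prior_survey[item], "prior survey"
    # 3. Carry demographics forward from the previous census when the
    #    operator has not changed.
    if item in DEMOGRAPHIC_ITEMS and record.get("operator_unchanged") and item in prev_census:
        return prev_census[item], "previous census"
    # 4. Administrative data (e.g., FSA Conservation Reserve Program acres).
    if item in admin_data:
        return admin_data[item], "administrative"
    # 5. Donor imputation from a similar farm.
    donor = find_donor(record)
    if donor is not None and item in donor:
        return donor[item], "donor"
    return None, "refer to analyst"

# Example: an item available only from administrative data.
value, source = impute("crp_acres", record={}, prior_survey={}, prev_census={},
                       admin_data={"crp_acres": 40.0}, find_donor=lambda r: None)
```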
Separate system processes were established to 
efficiently provide data from a similar farm to the 
edit when donor imputation was required. The farm 
characteristics used to define similarity between a 
recipient record and its donor record were 
determined dynamically by the edit logic. 
Euclidean distance was used for similarity 
computations, with each contributing similarity 
characteristic scaled appropriately. The most similar 
farm based on this criterion (the “nearest neighbor”) 
was identified and returned to the edit for use as a 
donor. The calculated distance between the 
centroids of the principal counties of production of 
the donor and recipient was always included as one 
of the measures of similarity. 
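A scaled Euclidean nearest-neighbor search of this kind can be sketched as follows. The similarity characteristics and scale factors here are illustrative assumptions; the actual edit chose the characteristics dynamically.

```python
import math

# Sketch of nearest-neighbor donor selection: scale each contributing
# similarity characteristic, then pick the donor pool record minimizing
# Euclidean distance to the recipient. Feature names and scale factors
# are illustrative only. The distance from the recipient's principal
# county of production is modeled as a precomputed feature, with the
# recipient's own value set to zero.

def nearest_donor(recipient, pool, features, scales):
    def distance(donor):
        return math.sqrt(sum(
            ((recipient[f] - donor[f]) / scales[f]) ** 2 for f in features
        ))
    return min(pool, key=distance)

recipient = {"acres": 250, "sales": 120_000, "county_miles": 0}
pool = [
    {"id": "A", "acres": 240, "sales": 115_000, "county_miles": 30},
    {"id": "B", "acres": 900, "sales": 500_000, "county_miles": 5},
]
donor = nearest_donor(recipient, pool,
                      features=["acres", "sales", "county_miles"],
                      scales={"acres": 100, "sales": 50_000, "county_miles": 50})
# Farm A is far closer in scaled acres and sales, so it is selected even
# though farm B's county centroid is nearer.
```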
To provide donors to the automated edit, a pool of 
successfully edited records was maintained for each 
section of the report form. These donor pools began 
with 2007 census data, reconfigured to emulate 2012 
data and then edited using 2012 logic. Data from the 
2010 Census Content Test were similarly remapped 
and edited before being added to the original donor 
pools. As 2012 records were successfully processed, 
they were added to the donor pools, which 
maintained the most recent data for each farm. 
Donor pools were updated approximately every 
other week, as determined by edit processing 
schedules. After several updates, all initial data 
records were dropped, leaving only 2012 records in 
the donor pools. After each update, donor pool 
records were grouped into strata containing farms in 
the same state of similar type and size, using a data- 
2012 Census of Agriculture 
USDA, National Agricultural Statistics Service 
