Wednesday, November 02, 2005

Future Employees of the A&P

1. Journalism education gets a bad rap from both sides. Universities find journalism schools anti-intellectual, more akin to business schools than to law schools, but without the rich alumni to justify promoting them on campus. Nor does the profession have much use for formal education in journalism. The New Yorker writer and Columbia University journalism graduate A.J. Liebling put it most famously when he dismissed the program as having “all the intellectual status of a training school for future employees of the A & P” (Remnick, 2004). These attitudes call for a methodical study of how journalism education is perceived. Since a study of the quality of a journalism education is outside the realm of this exercise, it is useful to focus instead on perceptions of the value of journalism education. To study that value, this exercise looks to a group of people who can put a monetary figure on it: prospective employers of journalists. Operationally, this group consists of editors who hire and manage journalists at magazines, newspapers, and online publications. The exercise works with the following research question: How does a journalist’s education affect his desirability to prospective employers?

2. To explore the possible answers to this question, I propose the following two hypotheses, and their corresponding null hypotheses:
H1: There is a relationship between having a degree from a journalism school and level of success in a journalism career.
H0: There is no relationship between having a degree from a journalism school and level of success in a journalism career.
H2: Editors show a clear preference for hiring journalists with journalism degrees over those without journalism degrees.
H0: Editors show no clear preference for hiring journalists with journalism degrees over those without journalism degrees.
H1 tests for a simple relationship between the variables of having a journalism school degree and success in a journalism career. This is a weaker, two-tailed hypothesis. H2 looks for a directional relationship between having a journalism degree and likelihood of being hired. Though easier to reject, this is a stronger, one-tailed hypothesis.

3. Since having a journalism degree or not is a simple dichotomous measure, I will operationally define the “level of success in a journalism career” referred to in H1. Level of success in a journalism career can be defined as the percentage change in a participant’s salary over the most recent continuous ten years of the participant’s employment. Since journalism careers are often volatile (and changing jobs can even be a sign of success, not failure), “continuous employment” does not require that the participant stay at the same publication for all ten years. The percentage change in salary serves as a proxy for the assumption that the journalist’s career is advancing. Freelance journalists will not be included, since, in effect, they hire themselves, and their journalism degree would not need to impress anyone to gain employment. An alternative measure of success, honors and awards received for journalistic work, was rejected because too few awards exist to measure meaningfully. Another alternative, employer satisfaction with the journalist’s work, was rejected on the assumption that higher employer satisfaction would also produce higher salaries, which is the measure already under consideration.
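As a rough illustration of how this operational definition could be computed, here is a minimal sketch in Python; the ten-year salary history and the resulting figure are hypothetical, invented purely for illustration.

```python
def percent_salary_change(salaries):
    """Percentage change in salary over a continuous span of employment.

    `salaries` holds one annual salary per year for the most recent ten
    continuous years of employment, oldest first. The publications may
    differ from year to year; only the continuity of employment matters.
    """
    start, end = salaries[0], salaries[-1]
    return (end - start) / start * 100


# Hypothetical participant: salary rises from $32,000 to $51,000 over ten years.
career = [32000, 33500, 35000, 36000, 38500, 40000, 43000, 45000, 48000, 51000]
print(f"Level of success: {percent_salary_change(career):.1f}% salary growth")
# Prints: Level of success: 59.4% salary growth
```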

4. A second variable in this exercise, taken from H2, is “editors’ preference for hiring journalists with journalism school degrees.” It can be operationalized in several different ways:

a. Dichotomous measure: editors can be asked whether or not they have ever hired a journalist who holds a journalism school degree. Answers will fall into two categories: yes, they have; or no, they have not.

b. Nominal measure: applicants for journalism jobs can be classified into (at least) four categories:
1. Journalism school graduates with a bachelor’s degree
2. Journalism school graduates with a master’s degree
3. Liberal arts graduates with a bachelor’s degree
4. Liberal arts graduates with a master’s degree

c. To establish an ordinal measure, editors can be asked to respond to the following prompt: Rank the following attributes of journalism job applicants in order of desirability, from highest to lowest:
• A journalism degree
• A liberal arts degree
• Professional experience as a journalist
• Other professional experience
• Publication history
• A journalism internship
• Writing ability as shown in published stories
• Writing ability as shown on an employer-proctored writing test

d. The ordinal measure proposed in part c. could easily be adapted to serve as a Likert-type scale. The prompt could be rephrased to read: “Rate the importance of each of the following attributes of a prospective employee on a scale of 1–7, with 1 equaling ‘not important at all’ and 7 equaling ‘extremely important.’” Each of the attributes above would then be listed with a matching seven-point scale beneath it.
e. To obtain a ratio measure of how much editors value journalism degrees when hiring, a researcher could simply ask how many of the editor’s current employees hold journalism degrees. Since an organization with four such employees has twice as many as an organization with two, this is a natural ratio measure. To facilitate comparison across news organizations of different sizes, the researcher could express the number as a percentage of total staff.
In researching H2, these five operational definitions have varying degrees of utility. The dichotomous measure, for instance, would not be of much value on its own; at most it could be used to screen one category of editors out of the study if that were desired. (The “no” category might still be interesting to include, and it is also captured by definition e., the ratio measure.) Similarly, the nominal measure could sort respondents into categories for study with one of the other measures, but it is not very useful by itself. The ordinal measure in definition c. and the Likert scale in definition d. are quite similar in that both ask how important various attributes are to editors when they hire. The Likert scale is preferable, though, since the ordinal information collected in c. would also appear in d., and Likert-type ratings can defensibly be analyzed as interval data, as Labovitz (1970) argued.
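To make that comparison concrete, here is a minimal sketch of how responses under definitions d. and e. might be tabulated; the editor records, field names, and ratings are hypothetical.

```python
# Hypothetical survey records: each editor's 1-7 rating of a journalism
# degree (definition d.) plus staff counts for the ratio measure (definition e.).
editors = [
    {"j_degree_rating": 6, "staff_with_j_degree": 12, "total_staff": 30},
    {"j_degree_rating": 3, "staff_with_j_degree": 4, "total_staff": 25},
    {"j_degree_rating": 5, "staff_with_j_degree": 9, "total_staff": 18},
]

# Definition e.: journalism-degree holders as a percentage of total staff,
# so newsrooms of different sizes can be compared directly.
for e in editors:
    e["pct_j_degree_staff"] = 100 * e["staff_with_j_degree"] / e["total_staff"]

# Definition d.: mean importance assigned to a journalism degree, treating
# the 1-7 ratings as interval data in the spirit of Labovitz (1970).
mean_rating = sum(e["j_degree_rating"] for e in editors) / len(editors)

print(f"Mean importance of a journalism degree: {mean_rating:.1f} out of 7")
for e in editors:
    print(f"{e['pct_j_degree_staff']:.0f}% of staff hold journalism degrees")
```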

5. One potential threat to the validity of a test of H2 is sampling bias. For example, if the study were to survey only editors who were themselves graduates of journalism schools, the generalizability of the results to the entire universe of editors would be reduced. While such a limitation would increase the study’s internal validity, it threatens its external validity (Krathwohl, 1998). If editors who are journalism school graduates disproportionately favor journalism school graduates in comparison to other editors (which certainly has face validity), the relationship between the variables would appear to hold even if it were not true across the entire population of editors in the real world. This would be a Type I error. Random selection of respondents from a sampling frame (in this case, a list of editors) would reduce the effects of this sampling bias (Sudman, 1983).
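A simple random draw from such a sampling frame might look like the following sketch; the frame here is a made-up list of editor identifiers standing in for a real directory of editors.

```python
import random

# Hypothetical sampling frame: identifiers for every editor in a directory
# of magazine, newspaper, and online-publication editors.
sampling_frame = [f"editor_{i:03d}" for i in range(1, 501)]

# Simple random sampling without replacement reduces the risk that, for
# example, journalism-school-graduate editors are over-represented.
random.seed(2005)  # fixed seed only so the draw can be reproduced
sample = random.sample(sampling_frame, k=50)

print(sample[:5])  # the first few sampled editors
```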

These hypotheses could also be affected by a local history threat to validity. For example, if a chain of newspapers employing a number of respondents in the study were to suffer systemic financial problems and institute a wage freeze for several years, the percentage change in those journalists’ salaries would be zero for those years, undermining the operational definition of “success” for them. The research hypothesis would appear to fail even though the real-world relationship might still hold; this would be a Type II error. To preserve validity, those cases could be eliminated, but doing so might significantly reduce the sample size, threatening validity in other ways (Krathwohl, 1998, pp. 515, 527).
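A minimal sketch of that trade-off, using invented participant records, shows how excluding wage-freeze cases shrinks the usable sample:

```python
# Hypothetical participant records: percentage salary change plus a flag
# marking journalists whose employer imposed a multi-year wage freeze.
participants = [
    {"id": 1, "pct_salary_change": 42.0, "wage_freeze": False},
    {"id": 2, "pct_salary_change": 0.0, "wage_freeze": True},
    {"id": 3, "pct_salary_change": 18.5, "wage_freeze": False},
    {"id": 4, "pct_salary_change": 0.0, "wage_freeze": True},
]

# Dropping wage-freeze cases protects the "success" measure, but the
# sample size falls, which threatens validity in other ways.
usable = [p for p in participants if not p["wage_freeze"]]
print(f"Sample shrinks from {len(participants)} to {len(usable)} cases")
```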


References
Krathwohl, D. R. (1998). Methods of educational and social science research: an integrated approach (Second ed.). Long Grove, Illinois: Waveland Press.
Labovitz, S. (1970). The assignment of numbers to rank order categories. American Sociological Review, 35, 515–524.
Remnick, D. (2004). Introduction: reporting it all. In Just Enough Liebling: Classic Work by the Legendary New Yorker Writer (pp. ix–xxvi). New York: North Point Press.
Sudman, S. (1983). Applied sampling. In P. H. Rossi, J. D. Wright, & A. B. Anderson (Eds.), Handbook of survey research. Orlando, FL: Academic Press.
