Lecture 4: Academic Publishing & Refereeing
What makes a great empirical paper · publication process · how to write a referee report
4.1 Course objectives
- 4.1 Course objectives
- 4.2 What makes a good empirical paper?
- 4.3 The publication process
- 4.4 Referee reports
- 4.5 Discussion of Assignment II
- 4.6 Conclusion of Lecture 4
Welcome to Research in Finance
- Register for “exam” 13337 in campusonline by 30 November 2025. The registration is what binds you to the course requirements; without it you cannot submit. If you are registered but don’t submit, you receive a fail grade (5.0).
- Ask questions during or right after each session — that is the preferred channel.
- Admin / studies / exam-eligibility questions go to the registrar’s office (Studiensekretariat) at studiensekretariat@uni-ulm.de.
- Course-content questions outside class: email oliver.padmaperuma@uni-ulm.de, CC andre.guettler@uni-ulm.de.
- We also recommend the student advisory service.
Course Objective
Scope
We will:
- Prepare Master students for their empirical thesis
- Hands-on R intro for data management, visualization, cleaning, basic modelling
- Writing tips for theses, including LaTeX & Overleaf
- Referee reviews on research presentations for empirical critique skills
We will NOT:
- Deep dive into advanced stats or ML methods
- Specific finance topics (asset pricing, etc.)
- Full thesis writing / research design training
Approach
Part I — Learn the Basics
- Hands-on R intro: a widely used language for statistical computing
- Manage, visualize and clean data; run and interpret statistical models
- Solve a real empirical problem set in R, in groups
Part II — Apply your learnings
- Mandatory participation in the institute’s Brown Bag Seminar
- Two assignments (group work and individual referee report) — see Assignments / Exams
Course at a glance
Basics
Course objectives, schedule, assignments · Introduction to R · Live coding
- Course objectives, schedule and assignments
- Introduction to R and RStudio
- Live coding: variables, vectors, matrices, data frames, lists, functions, loops
- Data import and export
Data Handling & Visualization
API access, merging, cleansing, transforming and visualising financial data in R · Introduction to Overleaf
- API access (Nasdaq Data Link / Quandl, FRED, Yahoo, CoinGecko, Polygon)
- Import and cleanse: read_csv, mutate, types
- Merge and append data (merge, bind_rows)
- Filter and mutate (dplyr): subset rows, derive variables
- Group by and summarise
- Pivot wide / long
- Data visualization with ggplot2 (six-step pipeline)
- Introduction to LaTeX and Overleaf
Statistical Analysis
Descriptive · inferential · modelling — applied in R
- Descriptive statistics in R
- Correlation matrix and Pearson correlation test
- t-Test and Wilcoxon test
- Shapiro-Wilk and Kolmogorov-Smirnov tests
- Linear regression with fixed effects
- Clustered standard errors
- Exporting regression tables with stargazer
- Discussion of Assignment I (Problem Set)
Academic Publishing & Refereeing
What makes a great empirical paper · publication process · how to write a referee report
- What makes a good empirical paper (contribution, identification, write-up)
- The publication process step by step
- Top finance and economics journals
- Bad outcome vs revise & resubmit
- Referee Reports — summary, major issues, minor issues
- Referee checklist (question, identification, data, econometrics, results)
- Discussion of Assignment II (Referee Report)
Brown Bag Seminar
Engage with doctoral research and prepare your referee report
- Doctoral research presentations
- Apply empirical / writing tips for the referee report
- Group discussion and Q&A
Assignments / Exams
Assignment I — Problem Set (50% of your grade)
Documented .R script + PDF write-up (Overleaf)
Group of up to 5.
Submit by emailing oliver.padmaperuma@uni-ulm.de, CC andre.guettler@uni-ulm.de. Subject pattern: Research in Finance_assignment-1-problem-set_surname1_surname2_…
Deadline: 19 January 2026
Assignment II — Referee Report (50% of your grade)
2.5–3 page referee report on a Brown-Bag presentation
Group of up to 5.
Submit by emailing oliver.padmaperuma@uni-ulm.de, CC andre.guettler@uni-ulm.de. Subject pattern: Research in Finance_assignment-2-referee-report_surname1_surname2_…
Deadline: 3 February 2026
4.2 What makes a good empirical paper?
Three factors
- Contribution
- Identification
- Write-up
- Journal space is very scarce and many great papers compete for it.
- Solid identification and write-up are necessary but not sufficient for a top publication.
Notes
The three-factor framework is the lens you’ll use to evaluate every paper from now on — both your own and the ones you read. They are not equally weighted: at the top journals, contribution is the binding constraint and the most common reason for rejection; identification and write-up are necessary hygiene but rarely the decisive factor when contribution is solid.
The asymmetry is important to internalise early: a brilliantly identified, beautifully written paper on a question nobody finds interesting will not be published in a top journal. A paper on a genuinely important question with adequate (not perfect) identification and decent write-up has a chance. The implication is to spend most of your project-design time picking the right question, not optimising the regression specification on a question that doesn’t matter.
Top finance and economics journals reject 90–95 % of submissions. Even at lower-ranked journals, rejection rates are typically 70 %+. Don’t take rejections personally — the modal outcome of any submission is rejection, and the modal eventually-published paper has been rejected at least once before being accepted somewhere.
1. Contribution (I) — common rejection reasons
Often, submissions get rejected because:
- Low incremental contribution.
- Overly narrow contribution.
- Results are not very surprising.
- Results have unclear importance.
Some Reflections on Contribution (Rynes) — available on the course Moodle.
Notes
“Low incremental contribution” and “results not surprising” are the two killer rejection reasons. Both reduce to the same thing: the reader’s posterior should move when they read your paper. If they already believed your conclusion, or if your evidence is too weak to update them, the paper has not contributed.
To stress-test your own paper for this risk, try writing the abstract first and asking: “If a colleague read just this abstract, what would they learn?” If the answer is “what I already would have guessed”, the contribution is too thin. The fix is usually to sharpen the question — broader questions are easier to motivate but rarely give surprising answers; narrower, more specific questions are harder to motivate but more likely to land somewhere unexpected.
“Overly narrow contribution” is the opposite trap: a beautifully clean answer to a question only three other researchers care about. The best papers thread the needle — narrow enough to give a sharp identified answer, broad enough that a wide audience cares about the result.
1. Contribution (II) — sources of good ideas
- Activity — frequent interactions with industry experts and academic colleagues often spark great research ideas.
- Intuition — a feeling of excitement, rather than logical analysis, often points to an important research lead.
- Theory — new pieces of theory or puzzles point towards something that is still poorly understood.
- Real world — research problems often have an applied flavour, tackling current real-world issues.
Notes
The four sources of good ideas, in roughly the order they tend to be most productive:
- Activity — talking with practitioners, attending seminars, reading working-paper announcements, going to conferences. Empirical questions worth answering rarely arrive while you’re sitting alone reading textbooks; they show up when someone in industry says “we’ve noticed that…” or a seminar speaker presents a result that begs a follow-up. Build the habit of attending the institute’s brown-bag seminars (the next lecture is itself a brown-bag) and other research seminars in the area.
- Intuition — when a result feels off, or a pattern in data feels surprising, follow it. Many of the best papers started as “wait, this can’t be right” reactions. The discipline is to take that reaction seriously and check whether the surprise survives careful analysis.
- Theory — open puzzles in the literature are concrete starting points. Pick a recent influential paper that proposes a mechanism; ask “is there a setting where this prediction would be especially clean to test?” or “what does this theory imply that nobody has tested yet?”.
- Real world — current events (regulatory changes, market crises, new instruments like crypto) generate natural experiments that didn’t exist before. The 2008 financial crisis, the Covid liquidity event, the rise of stablecoins, the growth of crypto futures markets are all “natural experiments” that produced waves of research.
Combining sources is most powerful — talking to a practitioner who flags a real-world phenomenon that contradicts a theoretical prediction is roughly the ideal idea-generation process.
1. Contribution (III) — timing
- Contribution of topics varies over time.
- Example: banking topics were “hot” during the financial crisis with many insolvent banks and huge public rescue plans.
- But: many researchers jump on these hot topics! By the time you finish, the wave has crested.
Notes
The “hot topic” trap is real and is a common reason competent papers fail to publish. The empirical-finance pipeline is long — typically 2–3 years from idea to acceptance — so by the time your paper on “the financial crisis and bank capital” lands at a journal, dozens of similar papers have already done so, the editor has rejection fatigue, and your incremental contribution looks small even if your craft is good.
Two implications for choosing topics:
- The window for hot topics is shorter than the publication pipeline. If a topic is already obviously hot when you start, you’re probably late. Sustainable strategy: pick topics that are just before becoming hot — early enough that you can publish before the field is saturated.
- Or pick topics that are robust to fashion. Public guarantees and bank risk-taking (Gropp, Gruendl, and Guettler 2014) is a question that’s been important for 30 years and will be important for 30 more — papers on it can be published regardless of the cyclical news cycle. Identifying timeless questions is a different (and arguably better) strategy than chasing waves.
Examples from the institute’s recent publication record: Bitcoin Ordinals & Inscriptions (a niche-but-novel topic that got into a specialist blockchain journal); Pre-Publication Revisions of Bank Financial Statements at the Journal of Financial Intermediation (a question that’s been studied for years but with new institutional data).
1. Contribution (IV) — data
- New data sets enable new research questions.
- New data sets allow new identification strategies.
- New (and usually larger) data sets allow better inference.
Notes
A new dataset is one of the highest-leverage sources of contribution. Three patterns:
- A new question becomes feasible. Until COMPUSTAT was widely available, large-sample firm-level US studies were nearly impossible. Until WRDS made detailed CRSP-Compustat-IBES merges easy, the cost of cross-source analyses was prohibitive. Crypto-market data (FTX-era OHLC, on-chain transaction histories, prediction-market resolutions) is enabling a wave of papers right now that simply couldn’t be written ten years ago.
- A new identification strategy becomes feasible. Detailed loan-level data unlocks within-borrower comparisons (same firm borrowing from multiple banks, holding firm-quarter fixed). High-frequency intraday data lets you identify around-event reactions in narrow windows.
- Better inference becomes feasible. Larger samples mean tighter confidence intervals; richer covariates allow more demanding fixed-effect saturations.
The institute’s recent pre-publication revisions paper (Guettler et al. 2024) is a clean example of this — institutional access to a dataset of bank financial statements before and after publication enabled a question (do banks revise statements that make them look worse?) that the public versions of the data alone could never answer.
A new dataset is also a moat: while you have proprietary access (or first-mover advantage in cleaning a publicly-available but messy source), you can publish multiple papers on it. WRDS levels that playing field for standardised academic datasets, but custom-collected or scraped data is increasingly common in working papers.
2. Identification
- Top publications require very robust identification.
- Don’t confuse correlation with causality.
- Read state-of-the-art empirical papers.
- Be as close to a randomized experiment as possible.
Mostly Harmless Econometrics: An Empiricist’s Companion (Angrist and Pischke 2009).
Notes
Identification is the technical core of empirical economics: what variation in the data are you using to claim a causal effect? The OLS slope on treatment measures correlation between treatment and outcome; whether that correlation is causal depends on what you’ve controlled for, what you’ve absorbed via fixed effects, and crucially what you assume about the residual variation.
The hierarchy of identification strategies, from weakest to strongest:
- Cross-sectional regression with controls. “I added 30 control variables.” Vulnerable to any omitted variable correlated with both treatment and outcome.
- Panel data with fixed effects. “I absorbed all time-invariant firm characteristics with firm FE.” Better — handles a whole class of confounders — but still vulnerable to time-varying omitted variables.
- Difference-in-differences. “Treatment and control groups had parallel trends before the policy change; the divergence after is the causal effect.” Strong if the parallel-trends assumption is defensible.
- Instrumental variables. “I have a variable that affects treatment but not outcome directly.” Strong but requires the exclusion restriction to be defensible — often the hardest part.
- Regression discontinuity. “Just-eligible vs just-ineligible firms are nearly identical except for treatment status.” Among the strongest in cross-sectional finance work.
- Randomised experiments. “We literally randomised treatment assignment.” The gold standard — rare in finance because randomisation is impractical for most market interventions, but increasingly common in field experiments with institutions.
Aim as high in this hierarchy as the data allows. Mostly Harmless Econometrics (Angrist and Pischke 2009) is the canonical reference for understanding when each strategy works and when it fails. The Gropp–Gruendl–Guettler natural experiment on public guarantees (Gropp, Gruendl, and Guettler 2014) is a good institute-internal example of using exogenous variation in policy assignment to identify a causal effect.
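To make the hierarchy concrete, a difference-in-differences specification with fixed effects and clustered standard errors might look as follows in R. This is a minimal sketch: the data frame `df`, its column names, and the choice of the `fixest` package are illustrative assumptions, not material from the lecture.

```r
# Minimal difference-in-differences sketch (illustrative variable names).
# df is a firm-year panel with columns: firm_id, year, treated (0/1 group
# dummy), post (0/1 after the policy change), and an outcome y.
library(fixest)

did <- feols(
  y ~ treated:post |    # DiD interaction; group and time main effects
    firm_id + year,     # are absorbed by the firm and year fixed effects
  cluster = ~firm_id,   # standard errors clustered at the firm level
  data = df
)
summary(did)
```

The coefficient on `treated:post` is the DiD estimate; the parallel-trends assumption still has to be argued (and ideally plotted) separately.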
3. Write-up
- Awesome: you put together a set of results!
- The write-up should be at least as important as the econometrics.
- Be prepared to rewrite the paper many times.
- Title / Abstract / Introduction are the most important parts.
Writing Tips for Ph.D. Students (Cochrane 2005) — available on the course Moodle.
Notes
“The write-up should be at least as important as the econometrics” is the single most underweighted lesson for new researchers. Editors and referees read your title, abstract, and introduction first; they read the full paper only if those convince them to invest the time. A brilliant identification buried in a bad introduction will not get read carefully.
The funnel of attention in academic publishing:
- Title — read by everyone who sees the table of contents. Should communicate the question and approach in 8–12 words. Avoid cute / clever titles for empirical papers; clarity wins.
- Abstract — read by everyone who finds the title interesting. Three sentences: what you ask, what you do, what you find. Memorise the abstracts of three top papers in your area; the structural template is universal.
- Introduction — read by everyone who finds the abstract interesting. The intro is doing 90 % of the work in convincing the editor and referees that the paper is worth their time. Spend disproportionate effort here.
- Tables and figures — readers skim these next. Each table should have a self-contained caption (a reader who only looks at the captions and tables should grasp the paper).
- Body — read by the small set who care enough to dig in.
- Footnotes and appendices — for the truly committed.
Cochrane’s writing tips (Cochrane 2005) is a 30-page document that has shaped a generation of empirical-finance papers; read it once a year for the rest of your research career. The single most repeated rule: write the abstract first, then write the paper to match it. If you can’t write the abstract, the paper isn’t ready to write.
Be prepared to revise the introduction many, many times — first-draft introductions are almost never any good; tenth-draft introductions are usually excellent.
4.3 The publication process
Be patient — but learn now
You have to learn to be patient. — Learn? I want to be patient now!
Anonymous PhD student
Notes
Most students starting graduate work underestimate the publication timeline by years, not months. A realistic schedule from “I have an idea” to “the paper is in print at a top-5 journal”:
- Idea + literature review + initial data work: 6–12 months.
- First working paper, internal feedback, conference presentations: 6–12 months.
- Submission, first-round decision (R&R or rejection): 3–9 months at top finance journals; longer at top economics journals.
- One or two rounds of R&R, each 6–18 months: 12–36 months total.
- Acceptance, proofs, online-first publication: 3–6 months.
- Print issue: another 6–12 months depending on journal backlog.
That’s typically 3–5 years from initial idea to print, with several rejections somewhere along the way. Even working papers and conference presentations bring impact much faster — for a doctoral candidate, a paper that’s at “second-round R&R at a top journal” is usually treated as job-market-ready, even if the print issue is still years away.
The actionable lesson: start the publication clock running early. Working papers count for citations; conference presentations count for visibility; rejections count as progress (you’re learning what doesn’t work). Patience is required because the calendar is long; activity is required because the calendar runs whether you’re working or not.
Step 1 — find the question
- What are you really interested in?
- You need to be very (!) interested in the topic — you’ll invest a lot of time in it over the next years.
- Read relevant and current literature.
- Nail down a specific contribution to the existing literature.
- Concentrate on one central research question.
Notes
“Be very interested” is not motivational fluff — it’s a working condition. A multi-year project on a question you’re lukewarm about will either drag (you procrastinate) or get abandoned (when a more interesting topic appears) or get resented (which shows in the writing). Pick a question that you would still want to answer if no one paid you to.
The “one central research question” rule matters more than it sounds. New researchers routinely write proposals that contain three loosely-related questions and end up with a paper that does none of them well. Editors and referees can only retain one question per paper; if your paper makes them juggle three, they’ll hold the easiest one and reject for incremental contribution on the others. Pick the sharpest, narrowest, most-likely-to-yield-a-clean-answer question and structure the entire paper around it. You can always write the second paper later.
Reading the literature thoroughly is non-negotiable but limited in value: literature search tells you what has been done, not what should be done next. The questions that matter most are usually the ones you arrive at by combining something you read with something you observed in data or in practice — not by sitting in front of Google Scholar.
Step 2 — write a research proposal
- Compose a one-page write-up = research proposal.
- Concentrate on the first two main factors that make a good paper:
- Contribution
- Identification
- Is it feasible (do you have access to specific data, etc.)?
Notes
The one-page research proposal is a forcing function — anything that doesn’t fit on one page either isn’t sharp enough or contains scope creep. A well-formed proposal has four paragraphs:
- The question — what specifically do you want to answer, and why does the answer matter?
- The contribution — what’s the closest existing paper, and how does yours go further (sharper question, better identification, new data, novel mechanism)?
- The identification strategy — what variation in the data lets you make a causal claim? Treat it like a regression specification — be explicit about the controls, fixed effects, and clustering you’ll use.
- Feasibility — do you have the data, the access (WRDS, central-bank archives, proprietary partners), the computational resources? If a critical resource is uncertain, the proposal isn’t ready to commit to.
A sharp one-pager is the document you’ll send to potential coauthors and supervisors next. Vague one-pagers get vague feedback; concrete ones get concrete redirection. Spend the time getting it tight before circulating.
Step 4 — discuss the proposal
- Discuss your research proposal with supervisors and other suitable people (fellow students, industry experts).
- Two outcomes:
- Great response — adjust proposal and move forward.
- Not good enough — either adjust thoroughly or stop the project.
- This early you can still stop the project without too much time lost.
- The higher your goals, the stricter you need to be on what projects you undertake.
Notes
Killing a project early is a high-value skill that takes practice. Sunk-cost fallacy is powerful — once you’ve spent two months on something, the urge to keep going just to “make use of the work” is strong, even when an honest assessment says the paper won’t land where you need it.
A good heuristic for a stop / continue decision: imagine the paper finished and submitted. What’s the realistic best-case journal? Is that journal good enough for your career goals — for a doctoral student, “would this be among my top three papers when I go on the job market”? If the honest answer is no, stopping is the right call. The two months you saved go into a better project.
The flip side: don’t kill projects too early either. The first round of “this isn’t working” is often a temporary obstacle that yields to a better identification strategy or a new dataset. Stop only after you’ve genuinely tried two or three approaches and none of them produced sharp enough results.
Run the stop / continue conversation with your supervisor explicitly — they have far more pattern-matching on which projects pay off. The institute brown-bag seminar (the next lecture is one) is a useful forcing function for these conversations.
Step 5 — collect data
- Collect.
- Structure.
- Clean — distributional graphs (outliers?), missing data.
- Descriptive analysis (univariate statistics).
Notes
Data work is famously the part of empirical research that nobody warned you would consume 70 % of your time on a paper. The four bullets here map onto stages that always take longer than expected:
- Collect — for archival data (WRDS, FRED, BIS, central-bank repositories), this is “find and download” but the access negotiation alone can take weeks. For proprietary data (a partnership with a bank or a regulator), it can take months. For scraped or constructed data, it’s an entire project of its own.
- Structure — convert the raw extracts into the long-format panel your analysis needs. This involves choosing the unit of observation (firm-year, firm-quarter, loan-month), reconciling identifier conventions across sources, and resolving inevitable schema changes over time.
- Clean — handle missing values explicitly (delete? impute? flag?), winsorise outliers (typical convention: at the 1 % level for accounting variables, no winsorising for prices), document every choice. Write a data_cleaning.R script and document each line — six months later you will not remember why you dropped firms with negative book equity.
- Descriptive analysis — Lecture 3’s Table 1. This is also the moment to catch data errors: an asset whose returns max out at 10x normal levels probably has a corporate-action error; a country whose unemployment rate is exactly zero probably has a missing-as-zero coding bug.
Build the cleaning pipeline so it’s reproducible from raw inputs: raw_data/ → cleaning_script.R → analysis_data.rds. Six months later when a referee asks “what happens if you change the winsorisation threshold to 0.5 %”, changing one constant in the script and re-running should be a 5-minute task, not a week of recovering institutional memory.
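The raw_data/ → cleaning_script.R → analysis_data.rds pipeline could be sketched as below. File names, column names, the filters, and the winsorisation helper are illustrative assumptions, not the course's own script.

```r
# cleaning_script.R — reproducible from raw inputs (illustrative sketch)
library(readr)
library(dplyr)

WINSOR_P <- 0.01  # the one constant to change when a referee asks for 0.5 %

winsorise <- function(x, p = WINSOR_P) {
  q <- quantile(x, c(p, 1 - p), na.rm = TRUE)
  pmin(pmax(x, q[1]), q[2])   # clamp x to its [p, 1 - p] quantile range
}

raw <- read_csv("raw_data/firms.csv")

analysis <- raw %>%
  filter(!is.na(total_assets),
         book_equity > 0) %>%   # documented drop: negative book equity
  mutate(roa = winsorise(net_income / total_assets))  # winsorised at 1 %

saveRDS(analysis, "analysis_data.rds")
```

Because every cleaning choice lives in one script, re-running the whole pipeline after changing `WINSOR_P` is a one-line edit.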
Step 6 — empirical analysis
- Run empirical analyses according to identification strategy.
- Conduct robustness checks.
- Put together results according to your proposal’s story line.
- Discuss results with supervisor and others.
- You need very original (even surprising) and robust results for a top publication (less important for a working-paper version for the dissertation).
Notes
Robustness checks are the empirical paper’s immune system — they are how you (and the referees) test whether the headline result is real or a fragile artefact of one specific specification.
A typical robustness battery for an empirical-finance paper:
- Different control sets — drop the most-likely-confounded controls one at a time; the headline coefficient should remain stable in sign and significance.
- Different fixed-effect levels — show the result with and without higher-order FE saturation. Stability across saturations is reassuring; collapse-on-saturation is a flag.
- Sub-sample analyses — split by a meaningful margin (pre/post a regulatory event; large vs small firms; different sectors). The mechanism should hold in the subsamples where it should hold and disappear where it shouldn’t.
- Alternative outcome / treatment definitions — if you measured the dependent variable in a particular way, show it the other plausible way. If the result depends on a specific definition, that’s a finding to report transparently, not hide.
- Alternative inference — Newey-West vs. clustered SEs; bootstrap vs. asymptotic.
The paper should report the headline result in Table 2 (or wherever) and the robustness battery in subsequent tables / appendix. Referees will run the robustness check you didn’t include first; pre-empt them.
The “very original (even surprising)” rule is a top-journal threshold that matters more than craft: an extremely well-executed paper that confirms what everyone believed will struggle at a top-5; a paper with adequate identification on a result that genuinely surprises will sail. Originality is what gets the editor’s attention; craft is what survives the referees.
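A fixed-effect-saturation battery like the one described above can be scripted so that every specification runs from one list and lands in one comparison table. The variable names, the `fixest` package, and the data frame `df` are illustrative assumptions.

```r
# Robustness battery across fixed-effect saturations (illustrative sketch).
library(fixest)

specs <- list(
  baseline   = y ~ treatment + size + leverage | firm_id,
  year_fe    = y ~ treatment + size + leverage | firm_id + year,
  ind_x_year = y ~ treatment + size + leverage | firm_id + industry^year
)

# Estimate each specification with firm-clustered standard errors
models <- lapply(specs, feols, data = df, cluster = ~firm_id)
etable(models)  # treatment coefficient should stay stable across columns
```

Reading the resulting table column by column makes "stable across saturations" a visual check rather than a claim the referee has to take on faith.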
Step 7 — first working paper
- Write down first working paper version.
- Get feedback from supervisor and others.
- It’s a tough world: you’ll realise that few people, if any at all, read your paper.
- Offer to read others’ papers — and you’ll know more people who read yours.
Notes
The first working paper version is intentionally rough — it’s the artefact that lets you ask people for feedback. SSRN is the standard repository for finance working papers; posting establishes a date stamp and makes the paper citable.
The “few people read your paper” point is initially deflating but quickly becomes liberating: if the audience is small, you can iterate quickly and make substantive changes between revisions without anyone having committed to your earlier version. The reciprocity loop — read others’ papers carefully, get yours read in return — is how academic communities actually work. The institute’s brown-bag seminar (next lecture) is where this happens locally.
Step 8 — present and revise
- Present the paper (at internal brown bag seminar).
- Adjust paper.
- Yes, you need to invest a lot of time revising!
- Invest some money for professional editing — most of us aren’t native English speakers.
Notes
Presenting at the institute’s brown-bag is the first hard test — a friendly audience that nevertheless asks the questions a referee will eventually ask. Walking out of a brown-bag with three concrete revision items is a successful presentation; walking out with no questions usually means the framing didn’t land.
Professional editing is high-leverage for non-native speakers. Editors at top journals notice consistent fluency issues; a paper that reads as polished from the abstract onward gets a more generous read. Cost is modest (a few hundred euros for a working paper); the value is much higher.
Step 9 — submit to conferences
- Submit paper to good finance conferences.
- It is a good signal to have the paper accepted at top finance conferences (WFA, AFA, EFA, FIRS), since people get to know it:
- for the international job market (ASSA);
- for later publication at top journals.
- Time-consuming because conferences take place many months after the submission deadline.
Notes
Conference acceptance functions as a quality signal that journal editors and job-market committees both read. The big four in finance — WFA (Western Finance Association), AFA (American Finance Association), EFA (European Finance Association), FIRS (Financial Intermediation Research Society) — are highly competitive (acceptance rates ~10–15 %) and an acceptance is genuinely impressive on a CV.
Practical timeline: WFA’s June meeting has a January deadline; EFA’s August meeting has a February deadline. AFA is January with an early-summer deadline, often coordinating with the ASSA job market. Plan submissions backwards from these dates. Mid-tier conferences (regional finance associations, central-bank research conferences) are easier to enter and useful for getting feedback on early-stage work; they don’t carry the same CV signal but they’re real and worth pursuing.
Step 10 — choose the right journal
- Which journal fits your paper?
- Which journals do you cite? Most relevant publications signal a suitable journal.
- Try to aim higher than expected at the beginning. Top-5 finance journals usually need ≤ 3 months for first-round decisions; econ journals often longer.
- Strategic citations: who are the likely referees? Don’t miss citing papers by authors at target journals, and double-check the spelling of author names.
Notes
Journal selection is a pragmatic optimisation under uncertainty. Two failure modes:
- Aiming too low — submitting to a B-tier when an A-tier was realistic costs you a tier of citations, visibility, and (for early-career researchers) tenure-track impact. Accepting an A-tier rejection is much better than never trying.
- Aiming too high — submitting to an A-tier with a paper that obviously belongs at A− costs you 6+ months of waiting for a desk-rejection or first-round rejection. Time is the scarce resource in early career.
A useful heuristic: identify the three closest published papers to yours; check where they were published; submit to the same journal (or one tier higher if your contribution is sharper). The “where do papers in my conversation get published” signal is more reliable than journal-impact-factor lists.
Strategic citation matters for two reasons: (a) referees are usually drawn from the cited authors’ networks, so missing a relevant citation can mean missing an obvious referee — never a good signal; (b) a paper that doesn’t cite the editor’s own most-relevant work suggests you didn’t read carefully, which is a small but real desk-rejection risk. Read the journal’s recent publications in your area and cite the relevant ones.
Top finance journals
- Journal of Finance
- Review of Financial Studies
- Journal of Financial Economics
- Review of Finance
- Journal of Financial and Quantitative Analysis
- Management Science
- Journal of Banking and Finance
- Journal of Financial Intermediation
- Journal of Money, Credit and Banking
- Journal of Empirical Finance
- Journal of Financial Services Research
- The B list is not comprehensive — those listed are most relevant for Banking, Financial Intermediation, and general Finance.
- Country-specific differences in journal perception.
- Steady stream of new journals, including very good ones (e.g., the Review of Corporate Finance Studies and Review of Asset Pricing Studies, sister journals of the RFS), but mostly journals that are rarely read.
- Open-access journals can be a good alternative (faster, cheaper) but are often missing from rankings.
Top econ journals (Handelsblatt ranking)
- American Economic Review
- Econometrica
- Journal of Political Economy
- Quarterly Journal of Economics
- Review of Economic Studies
- Bell Journal of Economics
- Econometric Theory
- European Economic Review
- Games and Economic Behavior
- International Economic Review
- Journal of Business and Economic Statistics
- Journal of Econometrics
- Journal of Economic Theory
- Journal of Finance
- Journal of International Economics
- Journal of Labor Economics
- Journal of Monetary Economics
- Journal of Public Economics
- Journal of the American Statistical Association
- Journal of the European Economic Association
- Rand Journal of Economics
- Review of Economics and Statistics
Top 10 journals — Eigenfactor Scores
| Journal | Eigenfactor |
|---|---|
| American Economic Review | 0.1014 |
| Journal of Finance | 0.0614 |
| Journal of Financial Economics | 0.0534 |
| Quarterly Journal of Economics | 0.0476 |
| Review of Financial Studies | 0.0475 |
| Econometrica | 0.0461 |
| Journal of Econometrics | 0.0377 |
| Journal of Political Economy | 0.0364 |
| Review of Economic Studies | 0.0328 |
| Review of Economics and Statistics | 0.0289 |
Source: Journal Citation Reports (JCR), 2010 Social Science Edition.
Bad outcome — rejection
- Your paper is rejected.
- Don’t be upset and don’t take it personally!
- Rejection rates are very high at top journals.
- The average top paper collects several rejections as well!
- Try to identify suggestions that are feasible and justified, do additional analyses, rewrite, submit to the next suitable journal.
- Nice editors might recommend a suitable journal.
- Sometimes you get a crappy report that is not helpful at all — just submit to the next journal.
Notes
Statistical reality: at top finance journals, ~85–90 % of submissions are rejected. The expected number of rejections before publication for a paper that eventually gets into a top journal is two or more. Reject is the modal outcome, even for excellent papers. Rejections are not a signal that the paper is bad; they’re a signal that journal slots are scarce.
When a rejection lands:
- Don’t reply for 24 hours. The instinctive response is defensive; it will not improve the paper.
- Read the report carefully and separate substantive from procedural criticisms. Substantive criticisms (the identification has a problem, the literature framing missed a key strand, the result doesn’t survive a relevant robustness check) should drive revisions. Procedural (“the writing is unclear in section 3”) usually points to weakness somewhere — fix it.
- Identify which feedback is feasible and worth incorporating before resubmitting elsewhere. A paper that’s genuinely improved between submissions has a much better chance at the next journal.
- Pick the next journal by citing literature and contribution. Going down one tier (A → A−) is the typical strategy after a top-5 rejection; going to a similar-tier specialist journal is also common.
- Sometimes the report is unhelpful or the editor missed the point. It happens. Move on; don’t waste cycles arguing with a journal that’s already said no.
Good outcome (I) — Revise & Resubmit
- Your paper received a revise & resubmit.
- Congratulations! You made it into the next round.
- Strategy: address as many of the referees’ suggestions as possible.
- The referees are the gatekeepers!
- Do the additional tests and pray that your main results hold!
Notes
An R&R is not yet acceptance — it’s an invitation to attempt one more round, with a meaningful (though not guaranteed) probability of acceptance at the end. Conversion rates from R&R to acceptance vary by journal but are typically 30–60 % for top journals; better than the cold-submission rate but still uncertain.
Strategy in an R&R round:
- Address every comment, even the trivial ones. Skipping a referee point invites the question “why didn’t they do this?” in the next round.
- The editor’s letter is binding; the referees’ letters are advisory. If the editor highlights specific concerns, those are first-priority. If the referees disagree among themselves, the editor’s framing of the disagreement tells you which direction to lean.
- The hardest moment is when an additional test the referee asks for would, you suspect, undermine your main result. Run it anyway, transparently. If the result still holds: even stronger paper. If it doesn’t: better to find out now than have it published and retracted.
Good outcome (II) — write the response
- Rewrite your paper and put together a response to the referee(s).
- Copy their comments into the response.
- Add your response after each issue.
- Link to new tables in the paper.
- Most often you cannot put all the new results into the paper.
- Put these tables into the report to the referee(s) — referees like you being explicit and transparent about new results!
Notes
The response document is its own genre and is graded as carefully as the paper itself. A good response document is structured as a numbered list mirroring the referee’s comments verbatim, with your response immediately under each. Conventions:
- Quote the referee’s comment in full in italics, then your response in plain text. This makes it trivial for the editor and referee to verify each item was addressed without flipping pages.
- Add page / table references to where the change is in the revised paper (“This is now discussed in §3.2 on p. 12 and reflected in Table 4.”).
- Auxiliary tables that don’t fit in the main paper go into the response document as in-line tables. Referees appreciate not having to ask twice for results that you ran but didn’t publish.
- Be polite and specific even when the comment is wrong. “We disagree with the referee’s interpretation; the proposed alternative would …” is acceptable; “the referee misunderstood” is not.
A polished response document signals that you take the revision seriously — which is itself a positive signal for the next round.
Good outcome (III) — disagreements
- A major R&R can be as time-consuming as writing the first version.
- You may not always agree with some of the referee’s suggestions.
- You should still try to address them in a polite way.
- It can be risky to argue with a referee.
- If the editor highlights some issues: put extra effort into addressing them. At the end, the editor decides which paper gets published!
Notes
Picking when to push back vs. when to acquiesce is a judgement call that improves with practice. A useful test: would a third referee, reading both your response and the original critique, be persuaded that you’re right? If yes, push back politely. If your defence relies on “but the referee just doesn’t get it”, acquiesce.
Some pushback patterns that work:
- “We considered this approach but did not adopt it because [specific technical reason]; we have added a footnote on p. 8 explaining the choice.” Acknowledges the suggestion, justifies the alternative, leaves a paper trail.
- “The referee asks for X; we have done Y, which is the closest feasible analog given [data constraint]. Both yield the same conclusion.” Offers a partial compromise.
Patterns that don’t work: ignoring the comment, dismissing it without justification, or arguing at length without doing the requested test. Even if you don’t run the test, write a sentence in the paper explaining why not.
The editor’s letter is the binding ranking — if the editor flagged three concerns and the referees flagged ten, the three are first-priority. Editor pushback is much more dangerous than referee pushback, because the editor decides.
Acceptance
- After acceptance, the journal requests the Word/TeX files and original files for graphs.
- You’ll get a proof that needs to be cross-checked.
- Usually your paper appears online at the journal’s website before the print publication (you can cite by journal name & forthcoming).
- If you don’t receive the proof or don’t see your paper online soon after returning the proof — write the journal and ask what is going on!
- Often you also need to provide your code and data.
Notes
Acceptance is the goal, but it triggers more work, not less. The proof stage is high-stakes: typesetting errors at this point persist into the printed version. Read the proof carefully end to end; check every number against your source files (typesetting frequently introduces transcription errors in tables); confirm figure captions match the figures.
The push toward code and data submission is recent (last 5–10 years) and is now standard at most top journals. Plan for this from the start: clean your code, document the data sources, ensure the analysis script can run end-to-end on a fresh machine. The Journal of Finance, Review of Financial Studies, and Journal of Financial Economics all have public replication-package archives. Once your code is published, anyone can re-run it — pre-empt this by running it yourself before submission.
The “online first → print” gap is real (often 6–18 months). Citing your own paper as “Smith and Jones (forthcoming, Journal of Finance)” is acceptable practice during this period. Once the print issue lands, switch to the standard citation format with volume and pages.
4.4 Referee reports
- 4.1 Course objectives
- 4.2 What makes a good empirical paper?
- 4.3 The publication process
- 4.4 Referee reports
- 4.5 Discussion of Assignment II
- 4.6 Conclusion of Lecture 4
How to assess whether a paper is good or bad
The structure of a referee report:
- Summary
- Major issues
- Minor issues
Notes
The three-section structure is universal in finance-economics refereeing. It serves two purposes: (a) it gives the editor a fast read on the paper’s strengths and weaknesses without reading the report end-to-end; (b) it forces the referee to separate “is this paper publishable?” (major issues) from “what details would polish it?” (minor issues).
The referee’s report is for the editor first, the authors second. The editor uses your report to decide what to do with the paper; the authors use it to revise. A good report makes both jobs easy.
Writing referee reports is the other side of the publication process you’ll be on as your career develops. The skill — reading a paper, identifying its central claim, evaluating whether the evidence supports the claim, and formulating constructive feedback — is the same skill you use to evaluate your own papers. Writing referee reports is one of the highest-leverage ways to learn what makes a publishable paper.
Referee report — 1. Summary
Write a short summary of the paper using your own words:
- What is the question asked by the author?
- What is the identification strategy?
- What data is used?
- How is the hypothesis formulated and tested?
- What are the results?
The purpose of this section is to summarise the paper for the editor in a way that lets them grasp the essence of the paper and its contribution without having to read it.
Notes
The summary is a one-paragraph statement of what the paper does, in your own words. It is harder than it sounds — most first-draft summaries are too long, too detailed, or too close to the paper’s own framing.
The test of a good summary: an editor who has not read the paper can answer “what’s the question? what’s the data? what’s the headline result?” from your summary alone. If they can’t, your summary is incomplete or unclear.
A useful template:
- Sentence 1: the question, in plain language.
- Sentence 2: the data + identification strategy (one phrase each).
- Sentence 3: the headline result.
- Optional sentence 4: the contribution (compared to the closest existing paper).
Writing the summary in your own words (not paraphrasing the abstract) is essential — it forces you to understand the paper, not just transcribe its self-presentation. If you find you can’t write the summary without copying the abstract, that’s a flag that you need to read the paper more carefully before forming an opinion.
Referee report — 2. Major issues (I)
- Take 3 or 4 major negative (or positive) points that you have on the paper, one at a time.
- To do this, check carefully: the question, the theory/model, the link to the empirical analysis, the presentation of the data, the econometric analysis, and the results.
- Below is a checklist of the kinds of questions you should ask yourself.
Notes
“Major” issues are the ones that determine your recommendation (accept / R&R / reject). Three or four is the right number — fewer suggests you didn’t engage deeply; more dilutes the editor’s ability to weigh which issues are decisive.
Each major issue should be:
- Substantive — about the paper’s core claim, identification, or contribution. Not about typesetting.
- Specific — point to the section, table, or claim you’re objecting to. “Section 4.2 estimates a fixed-effect model but doesn’t cluster standard errors at the firm level” is actionable; “the econometrics is weak” is not.
- Constructive where possible — suggest a specific test or analysis the authors could run to address the concern. Even if you ultimately recommend rejection, the comments should point the authors toward a better paper.
Both negative and positive points count — saying “the question is novel and important” tells the editor that the paper has merit even if the execution has problems. The editor weighs both.
Referee report — 2. Major issues (II)
- For a positive point, argue why the question is particularly important, the approach novel, the techniques new, the identification strategy innovative, the data unusual, etc.
- For a negative point, you are often looking for lack of correspondence between:
- the idea and the model,
- the model and the empiricism,
- the empirical strategy and the conclusion.
Notes
The “lack of correspondence” frame is the highest-leverage critical lens for empirical papers. Most flawed papers don’t have a single broken piece — they have a gap between what they claim and what they actually demonstrate. Three common gaps:
- Idea ↔︎ model — the introduction motivates X, but the formal model only addresses Y. (“The paper claims to study how regulation affects bank risk-taking, but the model is about the principal-agent problem between depositors and managers — not about regulation at all.”)
- Model ↔︎ empirics — the model predicts a specific functional form or sign, but the regression is a generic OLS that doesn’t test the prediction. (“The model predicts a non-monotonic relationship, but Table 3 estimates a linear specification that can’t possibly detect non-monotonicity.”)
- Empirics ↔︎ conclusion — the regression result is statistically significant on a small effect, but the conclusion paragraph claims a large economic effect. (“The estimated coefficient is 0.001 and statistically significant; the conclusion says ‘firms substantially adjust their financing in response to taxation’ — the magnitude doesn’t support that claim.”)
When you find one of these gaps, that’s almost always the major issue worth leading with. It’s also the issue authors find easiest to address — usually they didn’t realise the gap existed and welcome the prompt to either fix the framing or fix the empirics.
Referee report — 2. Major issues (III)
- Another argument for rejecting a paper is when the paper has nothing wrong but is boring and not new in any way. If this is one of your points, refer to other works to show why this is all well known and already done.
- Your main job:
- Find the most related papers and check whether the paper you assess is better / worse than the existing literature.
- Often, there are only a few very related papers.
Notes
Boring-but-correct papers are a special case worth knowing how to evaluate. They have nothing wrong technically but don’t move the literature forward — usually because the question has been answered already, or because the data adds nothing genuinely new, or because the result is what everyone already believed.
When recommending rejection on this ground, cite the specific prior work that already established what this paper claims. “Smith (2018, JF) shows essentially this result for a similar sample; the present paper’s incremental contribution would be to do X, but it does Y instead, which is not new.” That’s a defensible critique that the editor can act on; “this paper isn’t very interesting” is not.
The opposite mistake — finding a paper boring because you already know the answer, when in fact the broader profession doesn’t — is also a real failure mode. Reading the closest 3–5 papers carefully before writing the report is the only way to calibrate.
Referee report — 3. Minor issues
Usually, if you have major criticisms about a paper that lead you to recommend rejection, you don’t even need to do a section on less important issues.
However, hopefully the papers you’ll be reading are not so bad — and you may have some less important though useful suggestions to improve the paper.
Notes
Minor issues are presentation-level (a poorly worded sentence, a missing citation, a confusing figure label, a typo in a table) rather than substantive (the identification has a problem). Conventionally, list them as a numbered list with page or section references.
Two cautions:
- Don’t bury major issues in the minor list. If something matters for the recommendation, it belongs in the major section. The editor reads the major section to decide; minor issues are taken into account only conditional on the paper proceeding.
- Don’t write minor issues for a paper you’re rejecting. A rejected paper doesn’t get revised in the way an R&R does, so investing time on stylistic suggestions is wasted effort. Brief major-issue letters are appropriate for clear rejections.
For an R&R or accept, a thorough minor-issues list is genuinely valued — it shows the authors that you read carefully and gives them a clear path to a polished final version.
Referee checklist (I) — the question & identification
- Is the topic clearly explained? Could the question be made more precise?
- Does the author do a good job of motivating the question in the introduction?
- Is the answer to the question obvious in advance?
- Is the question original? What is the contribution of the paper? Does the author pose a question of reasonable scope?
- Is the identification strategy clearly explained, including the source of variation (e.g., fixed effects, instruments)?
- Does the author address endogeneity concerns (reverse causality, omitted variables)?
- Are assumptions about error terms justified (uncorrelated with regressors)?
- Is the strategy robust to alternatives (different controls, specifications)?
- Does it convincingly identify causal effects rather than mere correlations?
Notes
The two checklists on this slide are the substantive backbone of any major-issues section. Most paper rejections trace to a problem in one of these two areas, and most R&Rs at top journals require the authors to address concerns from these two columns.
For the question: the editor wants to know “is this worth publishing?” The right test is what would the literature lose if this paper didn’t exist? If the answer is “very little” — the question has been answered, or the answer is uninteresting — that’s a contribution problem.
For identification: the test is what causal claim does the paper make, and is the variation in the data sufficient to support it? Common failure modes:
- Endogeneity treated as a footnote rather than a central concern.
- Fixed effects waved at without explaining what they absorb.
- Strong claims supported by weak instruments.
- Selection bias in the sample acknowledged but not corrected.
When you’re refereeing, force yourself to write down the first-order identification concern explicitly: “the central regression suffers from omitted variable X, which would bias the estimate up/down/an unknown direction”. Then evaluate whether the paper addresses it. Many otherwise-publishable papers fail at this step because authors got too close to their data to see the obvious confounder.
Referee checklist (II) — the data
- Does the author present a clear description of the data?
- Does the author’s choice of dataset seem well-suited to answering the question?
- If you had to replicate the author’s study five years from now, is there sufficient information about the source of the data and the sample used?
- Does the author discuss issues that may affect the estimation strategy: random sample? known sources of measurement error? cross-sectional dependence in panels?
- Does the author present summary statistics, and make good use of them to motivate the question or specific aspects of the analysis?
Notes
Data evaluation is largely about transparency and replicability:
- Source clarity — could you, as a third party with appropriate access, recreate the dataset from the description alone? If WRDS, which database and which extract date? If a proprietary source, which version and what coverage?
- Sample selection — explicit about who’s in and who’s out. “We start with all firms in CRSP, then drop financials, then drop firms with negative book equity, leaving N firms” — versus “our sample of N firms” with no derivation.
- Measurement-error acknowledgement — for self-reported data, survey data, or noisy proxies (analyst forecasts as proxies for expectations), the paper should engage with what the noise does to the estimates.
- Cross-sectional vs panel structure — for panel data, are the within / between dimensions distinguished? Does the variation actually used to identify the effect come from the right dimension?
Summary statistics are often perfunctory (“Table 1 reports descriptive statistics”) but the best papers use them strategically: a Table 1 panel showing pre-treatment balance between treatment and control groups is the visual evidence for an identification claim. A correlation matrix can flag multicollinearity that the regression specification needs to handle. A frequency table can reveal that 80 % of observations are concentrated in 10 % of the sample’s range.
When refereeing, ask: “given the description, could I re-run this analysis?” If the answer is no, that’s a transparency issue worth flagging.
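The balance-table idea above can be sketched in base R. This is a purely hypothetical example on simulated data: the variable names (`size`, `leverage`), the assignment mechanism, and all numbers are assumptions for illustration, not taken from any real study.

```r
# Hypothetical pre-treatment balance check on simulated data.
# Under random assignment, treated and control units should look
# similar before treatment; a significant gap here would undermine
# the identification claim.
set.seed(42)
n <- 400
treated  <- rbinom(n, 1, 0.5)             # simulated random assignment
size     <- rnorm(n, mean = 6, sd = 1.5)  # simulated pre-treatment log assets
leverage <- rbeta(n, 2, 5)                # simulated pre-treatment leverage

balance <- function(x, d) {
  tt <- t.test(x[d == 1], x[d == 0])      # two-sample (Welch) t-test
  c(mean_treated = mean(x[d == 1]),
    mean_control = mean(x[d == 0]),
    p_value      = tt$p.value)
}

round(rbind(size     = balance(size, treated),
            leverage = balance(leverage, treated)), 3)
```

Because assignment here is random by construction, any treatment–control differences are pure noise. In a real paper, a Table 1 panel of exactly this form is the visual evidence for (or against) pre-treatment comparability.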
Referee checklist (III) — econometric analysis
- Are the econometric techniques well-suited to the problem at hand?
- What are the properties of the estimators employed? Are issues regarding these properties adequately addressed?
- Is the econometric analysis carefully done and reported?
- Have alternative specifications been tried and compared, when necessary?
- Is the issue of robustness of the results addressed?
- What test statistics does the author employ? Do they answer the question?
Notes
Econometrics critique is the most technical part of the report and the area where students new to refereeing often feel least confident. A reasonable triage:
- Is the model well-suited to the question? A linear probability model on a binary outcome with predicted values outside [0,1] is a flag; using OLS on count data is a flag; running a panel regression that ignores within-group correlation is a flag.
- Are the standard errors right? Cluster levels appropriate to the data structure; HC robust SEs when heteroskedasticity is plausible; bootstrap when the asymptotic approximation is unreliable.
- Are the results robust to specification choices? The paper should show two or three alternative specifications (different controls, different fixed-effect saturations) and the headline coefficient should be stable across them. If only one specification works, that’s evidence the result is fragile.
- What test does the paper actually conduct? Sometimes the headline claim is about an effect being present; the paper estimates a coefficient and reports its significance — that’s the right test. Sometimes the claim is comparative (“X has a larger effect than Y”) and the paper reports two separate coefficients without testing the difference — that’s the wrong test. The reported test should map onto the claim made.
When in doubt, you can phrase econometric concerns as questions: “How sensitive is the estimate to dropping the largest 5 % of observations?” or “Have the authors considered estimating with two-way clustered standard errors?” That’s both polite and constructive.
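The clustering concern can be made concrete with a base-R sketch on simulated panel data. Everything here is illustrative: the firms, coefficients, and error structure are simulated, and the cluster-robust variance is hand-rolled (a plain CR0 sandwich without finite-sample corrections) purely to show the mechanics.

```r
# Simulated firm-year panel: the error contains a firm-level shock,
# so residuals are correlated within firm and naive OLS standard
# errors understate the true uncertainty.
set.seed(1)
n_firms <- 50; n_years <- 10
firm <- rep(seq_len(n_firms), each = n_years)
x    <- rnorm(n_firms * n_years)
y    <- 1 + 0.5 * x + rnorm(n_firms)[firm] + rnorm(n_firms * n_years)

fit <- lm(y ~ x)
X   <- model.matrix(fit)
u   <- residuals(fit)

# Cluster-robust (CR0) sandwich: sum the score vectors within each firm
bread <- solve(crossprod(X))
meat  <- matrix(0, ncol(X), ncol(X))
for (g in unique(firm)) {
  s    <- colSums(X[firm == g, , drop = FALSE] * u[firm == g])
  meat <- meat + tcrossprod(s)
}
se_naive   <- sqrt(diag(vcov(fit)))
se_cluster <- sqrt(diag(bread %*% meat %*% bread))
round(rbind(se_naive, se_cluster), 3)
```

With a within-firm error component, the clustered standard errors come out noticeably larger than the naive ones. In applied work you would use an established implementation (e.g., `sandwich::vcovCL` or the `fixest` package), but the mechanics are exactly this sandwich.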
Referee checklist (IV) — results & conclusion
- Are the results clearly stated and presented?
- Are they used in some interesting way (beyond quoting the value of the parameters and their standard errors)?
- Are the results related back to the question?
- Are appropriate caveats mentioned?
- Do the conclusions concisely summarise the main points of the paper?
- Are the conclusions well-supported by the evidence?
- Are you convinced? What did you learn from this paper?
Notes
The results section is where the paper either delivers on its promise or doesn’t. Specific things to check:
- Magnitudes are explicitly interpreted. A coefficient of 0.05 means nothing without context; “a 1-SD increase in regulation is associated with a 0.05 × SD = 1.2 percentage-point decrease in lending growth, equivalent to N billion euros over the sample period” tells the reader what the result means in their world.
- The headline result connects back to the question. If the question is about lending behaviour, the headline result should be a lending coefficient, not a tangential auxiliary regression buried in Table 5.
- Caveats are in the paper, not just in the conclusion. “We cannot rule out X” should appear when X first becomes relevant, not as a final-page hedge.
- The conclusion summarises faithfully. If Table 4 shows a marginally significant result (p = 0.04) for a small effect, the conclusion should not say “the paper provides strong evidence for X”. Calibrated language is what distinguishes credible empirical work.
The “are you convinced?” test is the right closing prompt. If at the end of reading you find yourself thinking “I would have done X, Y, Z differently”, those are the major-issues bullets. If you finish and think “this is solid, the result is real, I would teach this”, you have an accept recommendation.
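The magnitude point lends itself to a two-line R check. The numbers below are the hypothetical ones from the regulation-and-lending illustration above, not estimates from any real paper; `sd_x` and `mean_outcome_pp` are assumptions chosen for the arithmetic.

```r
# Back-of-envelope economic magnitude: translate a coefficient into
# units a reader can judge. All numbers are illustrative.
beta  <- 0.05   # estimated coefficient (assumed)
sd_x  <- 0.24   # one standard deviation of the regressor (assumed)

effect_pp <- beta * sd_x * 100   # effect of a 1-SD move, in percentage points
effect_pp                        # 1.2 percentage points

# Relate the effect to the outcome's level to judge whether words
# like "substantial" are defensible in the conclusion.
mean_outcome_pp <- 8             # sample mean of lending growth, in pp (assumed)
round(100 * effect_pp / mean_outcome_pp, 1)  # effect as a % of the mean
```

A referee who sees only "the coefficient is 0.05 (t = 2.1)" cannot run this check; a paper that reports the standardised effect and its share of the outcome's mean has done the referee's work for them.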
4.5 Discussion of Assignment II
- 4.1 Course objectives
- 4.2 What makes a good empirical paper?
- 4.3 The publication process
- 4.4 Referee reports
- 4.5 Discussion of Assignment II
- 4.6 Conclusion of Lecture 4
Assignment II — Create a referee report
- Attend the mandatory Brown Bag Seminar on 20 January 2026 (13:30–16:00) and select one doctoral presentation to critique, applying the writing, publishing, and refereeing tips from the course to practice thesis-level analysis.
You must submit ONLY one file:
- One 2.5–3 page report in academic referee style, focusing on the presentation’s contribution to the literature and your judgment of its empirical strategy.
- Key tasks: summarise the presentation’s main ideas; evaluate novelty vs existing literature; assess empirical methods (identification, robustness); discuss implications, strengths, weaknesses, limitations, and improvements.
- Work in teams of up to 5 students.
- You’ll receive the PDF of the presentation and the working paper. Sometimes there is no working paper available; in that case base your report solely on the presentation.
- Grading: this report accounts for 50 % of your grade, assessed on depth of analysis (contribution and empirical aspects), writing quality (concise, organised, skim-friendly), and originality of insights.
- Deadline: Submit your review as a PDF via email to oliver.padmaperuma@uni-ulm.de, with andre.guettler@uni-ulm.de in CC by 3 February 2026, including your name and the title of the chosen presentation in the document.
- Format: 11 pt Times New Roman, 1.5 line spacing.
Notes
The referee-report assignment is a structured exercise in evaluating someone else’s research the same way a journal editor would. A few practical pointers:
- Pick a presentation, not a paper. The brown-bag presentations are short (typically 30–45 minutes) — perfect for a 2.5-page report. If a working paper is available, read it for additional detail; if not, the slides + your seminar notes are the primary source.
- Take careful notes during the talk. You won’t be able to re-watch; capture the question, identification claim, headline result, robustness checks discussed, and the Q&A discussion (which often surfaces the major issues).
- Apply the four checklists from this lecture as your structuring device. Question + identification + data + econometrics + results — five short paragraphs cover the territory cleanly.
- Be specific. “The identification is weak” is a non-starter. “The DiD specification assumes parallel trends, but Figure 2 shows a pre-trend gap of 0.05 percentage points per quarter that is not addressed” is what a real referee report looks like.
- Be constructive. Even if you’d recommend rejection, your report should give the author a path forward — “the result would be more credible if X” rather than “the paper is unconvincing”.
- Keep to the standard length. The 2.5–3 page limit forces you to prioritise the major issues; a 5-page report dilutes your strongest critique.
The grading rubric — depth, writing quality, originality of insights — rewards focused, defensible critique over comprehensive but shallow coverage. Pick the three or four issues that most matter and develop them carefully.
4.6 Conclusion of Lecture 4
- 4.1 Course objectives
- 4.2 What makes a good empirical paper?
- 4.3 The publication process
- 4.4 Referee reports
- 4.5 Discussion of Assignment II
- 4.6 Conclusion of Lecture 4
Course at a glance
Basics
Course objectives, schedule, assignments · Introduction to R · Live coding
- Course objectives, schedule and assignments
- Introduction to R and RStudio
- Live coding: variables, vectors, matrices, data frames, lists, functions, loops
- Data import and export
Data Handling & Visualization
API access, merging, cleansing, transforming and visualising financial data in R · Introduction to Overleaf
- API access (Nasdaq Data Link / Quandl, FRED, Yahoo, Coingecko, Polygon)
- Import and cleanse: read_csv, mutate, types
- Merge and append data (merge, bind_rows)
- Filter and mutate (dplyr): subset rows, derive variables
- Group by and summarise
- Pivot wide / long
- Data visualization with ggplot2 (six-step pipeline)
- Introduction to LaTeX and Overleaf
Statistical Analysis
Descriptive · inferential · modelling — applied in R
- Descriptive statistics in R
- Correlation matrix and Pearson correlation test
- t-Test and Wilcoxon test
- Shapiro-Wilk and Kolmogorov-Smirnov tests
- Linear regression with fixed effects
- Clustered standard errors
- Exporting regression tables with stargazer
- Discussion of Assignment I (Problem Set)
Academic Publishing & Refereeing
What makes a great empirical paper · publication process · how to write a referee report
- What makes a good empirical paper (contribution, identification, write-up)
- The publication process step by step
- Top finance and economics journals
- Bad outcome vs revise & resubmit
- Referee Reports — summary, major issues, minor issues
- Referee checklist (question, identification, data, econometrics, results)
- Discussion of Assignment II (Referee Report)
Brown Bag Seminar
Engage with doctoral research and prepare your referee report
- Doctoral research presentations
- Apply empirical / writing tips for the referee report
- Group discussion and Q&A
Further reading
- Angrist and Pischke (2009) — Mostly Harmless Econometrics — the canonical reference on identification.
- Cochrane (2005) — Writing Tips for Ph.D. Students — short, opinionated, indispensable.
- Rynes — Some Reflections on Contribution — available on the course Moodle.
Notes
Two short pieces that compound in value over a research career:
- Cochrane’s Writing Tips (Cochrane 2005) is 30 pages and worth re-reading every six months. The actionable rules — “every paragraph should make one point”, “verbs over nouns”, “write the abstract first” — are deceptively simple but transform the readability of empirical-finance papers. Many of the writing habits that distinguish published-quality work from drafts trace to following these rules.
- Rynes on contribution is the meta-paper — what it means for an academic paper to make a contribution at all, and how editors weigh it. Read it before you commit to a research project: the framework helps avoid the most common contribution-shaped mistakes.
For identification specifically, Mostly Harmless Econometrics (Angrist and Pischke 2009) is the single most-cited reference on this slide deck for a reason — every empirical finance paper draws on its framework explicitly or implicitly.
Prepare before next lecture
- Skim the working papers posted to Moodle for the Brown Bag Seminar.
- Re-read the four checklist sections in this lecture before the seminar.
- Form your team of up to 5 if you haven’t already.
Notes
The next session is the brown-bag seminar itself — concrete preparation makes the difference between attending and engaging:
Skim the working papers in advance, even if you only spend 15 minutes per paper. If you come in cold, the talk is already moving while you are still finding your bearings. Skimming gives you the question and approach in advance, so you can listen for the parts that matter (identification details, robustness checks, Q&A discussion).
Re-read the four checklist sections so the evaluation framework is fresh. During a presentation, mentally tick the boxes: what is the question? what is the identification? does the data support the claim? where is the weakness? Half-formed observations during the talk become the bullet points of your referee report later.
Form your team of up to 5 before the seminar so you can divide observations during the session — one person tracks identification claims, one tracks econometric specifications, one tracks framing — and consolidate notes immediately afterward. Coming in with a team that is already organized saves a week of coordination later.
See you at the Brown Bag Seminar
- Brown Bag Seminar — 20 January 2026, 13:30–16:00.
- Take careful notes on the presentations.
- Pick one doctoral talk and apply the four checklist sections from this lecture.
- Submit your 2.5–3 page report as a group of up to 5 by 3 February 2026.