I know. That's where I got it.
And...
"In a survey, however, respondents sometimes provide untruthful answers. For instance, they may hide their dislike for Black candidates for fear of revealing their racism. To address this concern, we embedded in the survey a conjoint experiment that allowed us to elicit true voters’ preferences. We presented respondents with pairs of candidates who varied in their demographic traits. We looked at gender, race, sexual orientation, gender identity, health, religion, education, age, and political experience. And then we asked them to choose which candidate they would be more likely to support. Because we randomized these traits, we can estimate the effect of each candidate characteristic and the interaction of candidate characteristics (e.g. gender and race) on vote choice."
I don't understand how this gets around the tendency for some not to give truthful answers.
I can't speak to the methodology in this specific survey. But in general, a well-designed conjoint survey can weed out untruths and some biases. As a very simplified example, we might ask whether people prefer salad or french fries as a side dish. Let's assume we think there's a reasonable chance people will say they prefer salad but actually prefer french fries. It might be that they're trying to appear more health-conscious than they actually are... but it might also be that they genuinely believe they prefer salad and are unaware of their own patterns and biases. So you give them several questions that pair salads, french fries, green beans, mashed potatoes, and yams with a variety of main dishes. Then you go through the responses in aggregate and find the patterns. It might turn out that even though some people claimed they prefer salads, they consistently selected meal combinations that included french fries over those that included salads.
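Here's a minimal sketch of that logic in Python. Everything in it is made up for illustration (the dishes, the hidden utilities, the respondent and pair counts); it's not the study's methodology, just a toy showing how aggregating randomized paired choices can surface a preference that self-reports hide.

```python
# Toy conjoint simulation: respondents "claim" to like salad, but their
# hidden utilities favor french fries. Aggregating randomized paired
# choices recovers the true pattern. All numbers are invented.
import random

random.seed(42)

SIDES = ["salad", "french fries", "green beans", "mashed potatoes", "yams"]
MAINS = ["steak", "chicken", "fish", "tofu"]

# Hidden utilities: fries actually score highest despite the stated preference.
TRUE_SIDE_UTILITY = {"salad": 0.2, "french fries": 1.0, "green beans": 0.4,
                     "mashed potatoes": 0.7, "yams": 0.3}

def random_meal():
    return (random.choice(MAINS), random.choice(SIDES))

def choose(meal_a, meal_b):
    """Pick the meal with higher hidden utility, plus a little noise."""
    ua = TRUE_SIDE_UTILITY[meal_a[1]] + random.gauss(0, 0.5)
    ub = TRUE_SIDE_UTILITY[meal_b[1]] + random.gauss(0, 0.5)
    return meal_a if ua > ub else meal_b

# Each of 500 simulated respondents evaluates 10 randomized pairs.
shown = {s: 0 for s in SIDES}   # times each side dish appeared in a pair
picked = {s: 0 for s in SIDES}  # times a meal with that side dish won
for _ in range(500):
    for _ in range(10):
        a, b = random_meal(), random_meal()
        if a[1] == b[1]:
            continue  # same side dish on both: tells us nothing about sides
        winner = choose(a, b)
        for meal in (a, b):
            shown[meal[1]] += 1
        picked[winner[1]] += 1

# Win rate per side dish: how often a meal containing it beat the alternative.
for side in SIDES:
    print(f"{side:>15}: chosen {picked[side] / shown[side]:.0%} of the time")
```

Run it and french fries come out with the highest win rate even though nobody was ever asked "do you like fries?" directly. That's the basic trick: the preference falls out of the pattern of choices rather than a self-report.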
The difficulty is that it's hard to design a conjoint survey that controls for all of the influencing factors. Good conjoint studies tend to be very long surveys... and long surveys are expensive. People don't take those kinds of surveys for free, and the longer the survey, the more you have to pay people to take it, and the smaller the pool of people interested in taking it. So there's also a lot of risk of selection bias. It can be very difficult to ensure that you've got a valid sample large enough to extrapolate to a population.
Conjoint surveys can be extremely good at collecting data that is often hard to quantify. But the methodology, sample structure, and design end up being extremely important. It can be difficult to judge whether a conjoint study is well designed, or whether it has inherent skew or bias that is unaccounted for.
A sample size of 2,000 is a decent size, depending on how many conjoint pairs they evaluated. If it was on the order of 10 to 15 pairs, that's probably a good size, provided the demographic and policy elements presented were properly randomized (harder than you might think). Without access to the demographic profile of the respondents, it's difficult to say for sure. It at least passes muster at first glance for sample size, so it's probably not complete garbage.
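For a rough sense of scale, here's the back-of-the-envelope arithmetic. The 2,000 respondents comes from the discussion above; the 12 pairs per respondent is my assumption for illustration, since we don't know the actual number.

```python
import math

# Hypothetical figures: 2,000 respondents, 12 pairs each (assumed).
respondents = 2000
pairs_per_respondent = 12
observations = respondents * pairs_per_respondent

# Naive 95% margin of error for a choice proportion, at worst-case p = 0.5.
p = 0.5
moe = 1.96 * math.sqrt(p * (1 - p) / observations)
print(f"{observations} choice tasks, naive 95% MoE ≈ ±{moe:.1%}")

# Caveat: choices from the same respondent are correlated, so the honest
# margin is wider than this; clustered standard errors pull the effective
# sample size back toward the 2,000 respondents, not the 24,000 tasks.
```

The caveat in the comment is the part that bites: the 2,000 respondents, not the total number of choice tasks, is what really limits how finely you can slice the results by subgroup.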