Here I will discuss a sample image analysis project that takes you through tissue detection, tissue classification, cell detection and classification, and exporting results. The demo is based on sample brightfield sub-images from CMU-1.svs, the same image used in the demo projects and referenced on the Intro to QuPath Scripting page. There is a LOT of information here - Pete Bankhead’s videos on his YouTube channel may be a faster way to get an introduction to the software!
The project was created according to the steps shown in the official documentation (create folder, drag and drop images).

A zipped file containing a sample project you can use to test and follow along can be found in the same place as the scripting demo. In the “Brightfield demo” project, you can find the training images and training annotations, along with the full scripts for the analysis in Automate->Project scripts…

Starting a project

For most workflows there will be several major steps:

  1. Define the problem - choose a final output measurement first.

  2. Provided your images were acquired consistently (same lighting conditions, same staining procedures), calculate the stain vectors to be used for the project.

  3. Determine the area of interest, and whether it can be obtained simply with a thresholder.

  4. Perform the segmentation needed to acquire the final data - often a cell count or in some cases further pixel classification.

  5. Classify objects.

    1. Possibly add new measurements if the default measurements are insufficient.

  6. Perform spatial analysis or other more complex analyses through scripting.

  7. Export the data for the project.
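The workflow above can be sketched as a single QuPath script (Groovy, run from the script editor, where the QuPath functions below are available automatically). This is only an outline of the shape of such a script, not a recommended analysis: the stain vectors, classifier names, detection parameters, and output path are all placeholders you would replace with values from your own project.

```groovy
// Sketch of the workflow steps as a QuPath Groovy script.
// All names and numeric values below are illustrative placeholders.

// Step 2: set the image type and project-wide stain vectors
setImageType('BRIGHTFIELD_H_E')
setColorDeconvolutionStains('{"Name" : "H&E custom", ' +
    '"Stain 1" : "Hematoxylin", "Values 1" : "0.651 0.701 0.290", ' +
    '"Stain 2" : "Eosin", "Values 2" : "0.216 0.801 0.558", ' +
    '"Background" : " 255 255 255"}')

// Step 3: create the area of interest from a saved pixel thresholder
// ('Tissue threshold' is a classifier you would have created and saved first)
createAnnotationsFromPixelClassifier('Tissue threshold', 1000.0, 1000.0)

// Step 4: run cell detection within the tissue annotations
selectAnnotations()
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection',
    '{"detectionImage":"Hematoxylin OD","requestedPixelSizeMicrons":0.5,' +
    '"backgroundRadiusMicrons":8.0,"medianRadiusMicrons":0.0,' +
    '"sigmaMicrons":1.5,"minAreaMicrons":10.0,"maxAreaMicrons":400.0,' +
    '"threshold":0.1,"cellExpansionMicrons":5.0,' +
    '"includeNuclei":true,"smoothBoundaries":true,"makeMeasurements":true}')

// Step 5: apply a previously trained object classifier (placeholder name)
runObjectClassifier('Tumor vs stroma')

// Step 7: export the detection measurements (placeholder path)
saveDetectionMeasurements('/path/to/output/measurements.tsv')
```

Steps 5.1 and 6 (adding custom measurements and spatial analysis) are deliberately omitted here, as they are too project-specific for a generic sketch; the later sections cover them in more detail.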

This guide is certainly not the only way to run a brightfield analysis - it contains many opinions that not everyone will agree with, or that may not apply to your specific project - but hopefully it includes enough suggestions and good practices to get a few people started. Validation is always important: running a bunch of different settings on your full data set, finding one that gives the results you want, and then calling that your analysis is the sort of thing that leads to retractions in the future. That is not how good science works - in fact, running through your full data set repeatedly is more like a retrospective study, as the images have already been collected and you can run whatever analysis on them you want.