The following video demonstrates the use of pixel classifiers to automatically define regions of interest that cannot easily be generated with a simple single-channel threshold. Make sure the video settings are set to HD so you can read all of the text!

Pixel Classifiers

When attempting to identify areas based on staining types and intensities, simple thresholding often falls short due to variations in staining. This is where the pixel classifier comes in. Although it is not a deep learning tool that recognizes objects in context (each pixel is still treated independently), smoothing allows it to take adjacent pixels into account. By classifying collections of pixels similarly, we can delineate boundaries to create annotations or detections, akin to thresholding techniques.
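
To make that last point concrete, here is a toy sketch in plain Groovy (the same language QuPath scripts use; the intensities and kernel are invented for illustration) showing how a smoothed feature lets a strictly per-pixel decision still reflect its neighbors:

```groovy
// Toy example: a pixel's *smoothed* intensity mixes in its neighbours,
// so a classifier that looks at one pixel at a time still "sees" context.
double[] row    = [0, 0, 10, 0, 0]       // raw intensities along one line of pixels
double[] kernel = [0.25, 0.5, 0.25]      // tiny Gaussian-like smoothing kernel

def smoothed = (0..<row.length).collect { i ->
    double acc = 0
    kernel.eachWithIndex { w, k ->
        int j = Math.min(Math.max(i + k - 1, 0), row.length - 1)  // clamp at the edges
        acc += w * row[j]
    }
    acc
}
println smoothed   // [0.0, 2.5, 5.0, 2.5, 0.0] - the bright pixel's neighbours are no longer 0
```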

The key distinction between pixel classifiers and thresholding lies in the requirement for training areas. These areas must be balanced both across and within classes. A common oversight, as seen in many forum examples, is the unbalanced selection of training areas. Large, blob-like training areas tend to focus on internal pixels, neglecting the scientifically crucial edges. For optimal performance, balance is essential, especially in areas like tumor analysis where both edge and center are critical. Lines or polylines often yield better results for training, producing higher-quality pixel classifiers.

As a best practice, either duplicate your project folder or start a separate project for generating and storing training data. Training within a single project using duplicated images or a composite training image is feasible, but risky. A single error with the 'Run for Project' command could erase all your training data, necessitating a restart if you need to adjust your classifier for a new image set or rectify issues discovered upon later data review.
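
If you do train inside your main project, it is worth at least exporting the training annotations somewhere safe before running anything destructive. A minimal sketch, assuming QuPath 0.3 or later (which added GeoJSON export); the backup file name is just an example:

```groovy
// Back up this image's annotations (your training regions) to a GeoJSON file
// inside the project folder. Run per image before risky "Run for project" steps.
def annotations = getAnnotationObjects()
def path = buildFilePath(PROJECT_BASE_DIR, getProjectEntry().getImageName() + "_training_backup.geojson")
exportObjectsToGeoJson(annotations, path, "FEATURE_COLLECTION")
println "Saved ${annotations.size()} annotations to ${path}"
```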

Interface

Classifier: This dropdown allows the user to select the algorithm used for pixel classification. 'Random trees (RTrees)' is an ensemble learning method that combines many decision trees for classification. Artificial Neural Network is another useful option, though modifying it through the Edit button can get somewhat complicated and stressful depending on your computer hardware. The other two options, Logistic Regression and K-Nearest Neighbors, I have found little use for. The video demonstrates the use of the Calculate variable importance checkbox within Random trees to establish which measurements are most important for your pixel classifier.

Resolution: This setting specifies the spatial resolution at which the pixel classifier will operate, denoted here as 'Moderate (2.60 µm/px)'. It defines the physical size each pixel represents, which impacts the granularity of the analysis. The resolution is particularly important, as the 5 “Scales” in the Features list are all based on this initial resolution. In general, each level below “Full” is a further 2x downsample, so Moderate, three levels below Full, is an 8x downsample. That means the smallest block of pixels that will be classified is 8x8, and you cannot get finer detail than that.
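
As a quick sanity check of the downsample math, here is a plain Groovy sketch; the 0.325 µm/px base resolution is inferred from the 'Moderate (2.60 µm/px)' example above, and the level names are those used in recent QuPath versions:

```groovy
// Walk down the resolution ladder: each level is a further 2x downsample.
double fullPixelSize = 0.325   // µm/px at Full resolution (inferred example value)
def levels = ["Full", "Very high", "High", "Moderate", "Low", "Very low", "Extremely low"]
levels.eachWithIndex { name, i ->
    int downsample = 2 ** i    // 1x, 2x, 4x, 8x, ...
    println String.format("%-13s %3dx downsample -> %6.3f µm/px", name, downsample, fullPixelSize * downsample)
}
// Moderate -> 8x -> 2.600 µm/px, so the finest classified block is 8x8 original pixels
```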

Tradeoffs in Resolution

You will get higher resolution and finer details when selecting Full, but it can be much slower to run. On the other hand, Low resolution pixel classifiers can be run, tested, and iterated on quickly, but will generate very blocky structures. I generally recommend starting with a low to moderate resolution, then stepping closer to Full as you get used to the amount of time those resolutions require.

One additional consideration is that you are currently limited to 4 "downsamples" of distance away from the central pixel when using information about surrounding pixel features. That means if you are interested in broad textures across large areas, you will need to start from a lower base resolution; a texture spanning hundreds of microns will be invisible to features that only reach a few dozen microns from each pixel at a high base resolution.

Features: This dropdown menu lets the user select the set of features the pixel classifier will use to make its decisions. 'Default multiscale features' is the only option here for multiplex images, but clicking on the Edit button allows you to choose Channels, Scales, Features, and some degree of normalization. While you cannot currently save and load feature sets, you can right-click on the dropdown menus to quickly Select all or Select none.

Tradeoffs in Features

The more features you have, the more training data you need, though for pixel classifiers this is usually less of an issue than for cell classifiers.
Additionally, the more features and the more training data you have, the slower your pixel classifier will run and the more memory it will take. That makes it useful, in some cases, to figure out which features/channels/resolutions your classifier can do without. If you have very small images or few channels, it may not be worth the time to optimize.

Output: The user can choose the type of output they want from the pixel classifier. 'Classification' means the output will be a categorical label for each pixel. The other option, 'Probability', displays the likelihood of each pixel belonging to each class.

Region: This setting determines where in the image the pixel classifier should be applied. 'Everywhere' means the classifier will analyze all pixels across the entire image. The other options let you limit the live prediction to particular regions of interest.

I strongly recommend looking over the previous post on pixel classifiers in brightfield, as that goes into far more detail while not relying too heavily on anything specific to brightfield analysis.

Buttons

Load training: Load training objects (annotations), and their respective pixel information, from other images. It is strongly advised that you use this when training a classifier for a project, as classifiers trained on a single image tend to fail on the rest. Just as you want a broad mix of pixel types when drawing annotations within an image, you also want as much breadth as possible in the selection of images used for training data. Including the brightest and dimmest images can be particularly helpful.

Advanced options: Covered in more detail in the brightfield pixel classifier link above.

  1. Reweight samples: this checkbox can help when you have too little training data for a particular class, though it will NOT simply fix accuracy on unseen data; that requires variety in the training data.

  2. Maximum samples: the maximum number of pixels used for training - this can be increased to allow larger and more complex training data, at the cost of speed. It may be worth it for accuracy.

  3. I haven't found that selecting features by PCA improves accuracy much, but it can improve speed by reducing the number of features that need to be calculated if, for example, you selected everything.

  4. For the other Advanced options, ask about them on the forum if you have questions. 

Live prediction: The button you press when you want to see your results. It requires the “C” button, which toggles the classifier overlay, to be on, and the overlay slider must not be set to completely transparent. Drawing while live prediction is active will cause the entire pixel classifier to update the moment you release/finalize a new annotation; this can be very annoying and very slow when training large pixel classifiers. I recommend turning it off when adding new annotations to your training data. It can be helpful to take a screenshot or snip of the overlay at 50% transparency before drawing multiple new regions.

Classifier name: Type a name where it says Enter name and click Save. Save will not be available unless you have a Project open. Once you save, a file will show up within the project directory in: Project/classifiers/pixel_classifiers/name.json
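
That saved name is also how scripts refer to the classifier later. A minimal sketch ("MyTumorClassifier" is a placeholder for whatever name you entered):

```groovy
// Loads the classifier saved as Project/classifiers/pixel_classifiers/MyTumorClassifier.json.
// Throws an exception if no classifier with that name exists in the open project.
def classifier = loadPixelClassifier("MyTumorClassifier")
println classifier
```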

Measure, Create objects, and Classify: These were covered in detail within the brightfield pixel classifier discussion.
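
For reference, the scripting equivalents that these buttons record in the Workflow tab look roughly like this (the classifier name and size thresholds below are placeholders):

```groovy
// Create objects: turn classified regions into annotations, discarding
// objects smaller than 1000 µm² and filling holes smaller than 250 µm².
createAnnotationsFromPixelClassifier("MyTumorClassifier", 1000.0, 250.0, "SPLIT")
// Use createDetectionsFromPixelClassifier(...) instead if you want detections.

// Measure: add per-class area measurements to the selected annotations.
selectAnnotations()
addPixelClassifierMeasurements("MyTumorClassifier", "MyTumorClassifier")

// Classify: assign classes to existing detections based on the pixel under each centroid.
classifyDetectionsByCentroid("MyTumorClassifier")
```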

Display

The display section on the right-hand side of the pixel classifier interface is locked to the selected Resolution: choosing Extremely low gives a very zoomed-out view, while Full displays a view where the original pixels are visible. As shown in the .gif to the right, mousing over the display area itself does not move it; instead, the display follows the mouse as it travels across the main image.

The bar below the display area allows you to visualize the various filters selected in the Features section, or to display the classification overlay. The slider controls the transparency of the selected classification or feature, and the two numbers indicate the minimum and maximum displayed intensity values (for the features).

Demonstrating how the mouse interacts with the display, and how the transparency slider affects what you see. The “C” classifier overlay is active for almost all of this .gif; turning it off would leave only the original pixels.

Tip

If you want to look at a particular location within your image using the display within the pixel classifier window, hold down Shift while the mouse is centered over the area of interest, then move the mouse back into the pixel classifier dialog box. Holding Shift prevents the display from updating as the mouse moves; release Shift once the mouse is back over the pixel classifier window.
You can then modify the settings in the dropdown below the display to see how the various filters look in the chosen area.