Problem 1: Error report 'Tolerance: 8.2031250e-07' during Validation Phase 1.
Answer 1: This is caused by inconsistent origin and spacing parameters between the prediction results and the validation input CTs. Please ensure that your saved prediction results share the same origin and spacing parameters as the corresponding input CTs.
Problem 2: Some teams did not submit largest-component results.
Answer 2: We will download the prediction files to check this. In addition, we will evaluate the TD and BD metrics after largest-component extraction, in both the validation and test phases. These results will be announced at the MICCAI 2022 Challenge Satellite Event on 22/09/2022.
Problem 3: Error Report: 'evalutils.exceptions.ValidationError: We expected to find 50, but we found 47. Please correct the number of predictions.'
Answer 3: This usually happens while saving the predictions, and can be fixed by replacing your I/O functions with the following. Pay attention that the metadata (origin and spacing) must be the same as that of the original input CTs:
import numpy as np
import SimpleITK as sitk

# Load a NIfTI file; origin and spacing are returned in (z, y, x) order to
# match the numpy array axes:
def load_itk_image(filename):
    itkimage = sitk.ReadImage(filename)
    numpyImage = sitk.GetArrayFromImage(itkimage)
    numpyOrigin = np.array(list(reversed(itkimage.GetOrigin())))
    numpySpacing = np.array(list(reversed(itkimage.GetSpacing())))
    return numpyImage, numpyOrigin, numpySpacing

# Save a NIfTI file. Lists and arrays for origin/spacing are assumed to be in
# (z, y, x) order (as returned by load_itk_image) and are reversed back to the
# (x, y, z) order that SimpleITK expects; tuples are passed through unchanged:
def save_itk(image: np.ndarray, filename: str, origin=(0.0, 0.0, 0.0),
             spacing=(1.0, 1.0, 1.0)) -> None:
    if not isinstance(origin, tuple):
        origin = tuple(float(v) for v in reversed(list(origin)))
    if not isinstance(spacing, tuple):
        spacing = tuple(float(v) for v in reversed(list(spacing)))
    itkimage = sitk.GetImageFromArray(image, isVector=False)
    itkimage.SetSpacing(spacing)
    itkimage.SetOrigin(origin)
    sitk.WriteImage(itkimage, filename, True)

Q1: The metrics used in Validation Phase 1.
A1: The metrics used in Validation Phase 1 are the DSC, FN error, and FP error, as defined in "Introducing Dice, Jaccard, and Other Label Overlap Measures To ITK".

The rank is based on the average rank over these metrics. However, this is not the final evaluation: the final evaluation uses TD, BD, DSC, and Precision, as presented in the Evaluation Guideline. In other words, the temporary rankings are based on the overlap-wise measures. We also tried to deploy the TD / BD evaluation as a docker method on grand-challenge.org, but it failed many times due to memory restrictions. Hence, we will compute the above metrics on our local servers for participants who successfully finish this challenge (in both the validation phase and the test phase), and then re-rank them.
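For reference, the overlap-wise measures can be sketched in a few lines of numpy. This assumes the common definitions from the ITK label-overlap report cited above, with FN error taken relative to the reference mask and FP error relative to the predicted mask:

```python
import numpy as np

def overlap_metrics(pred, gt):
    """DSC, FN error, and FP error for two binary masks of the same shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    dsc = 2.0 * tp / (pred.sum() + gt.sum())
    fn_error = 1.0 - tp / gt.sum()    # fraction of the reference that was missed
    fp_error = 1.0 - tp / pred.sum()  # fraction of the prediction that is spurious
    return dsc, fn_error, fp_error
```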
For participants who successfully take part in the final test stage of the challenge, we will provide comprehensive evaluation results (along with some other metrics excluded from the ranking, e.g., Sensitivity, Specificity, ...). The final result report will be announced on 22/09/2022. We hope to see you at the MICCAI Challenge Satellite Event!
[Update]: Thanks to Grand-Challenge.org, we have together fixed the out-of-memory problem in the TD / BD evaluation, and we will launch a live validation-phase leaderboard for you to test your submissions on TD, BD, DSC, and Precision. Each team will have three chances to submit; the submission procedure is the same as in the former Validation Phase 1.

Q2: About the docker running time
A2: We have tested our baseline, and 6 hours are enough to predict 150 images. However, considering participants' requests, we extend the preferred docker running time to 12 hours. The modified rule reads: "The docker should preferably execute in no more than 12 hours and preferably occupy no more than 16 GB of GPU memory (it must be smaller than 24 GB [a single GTX 3090]) to generate the segmentation test results." In addition, we will send all participants an official e-mail about the test phase on 18/08/2022.
However, we will record the inference time and GPU memory usage as an efficiency reference, which will be reported at our challenge satellite event on 22/09/2022.
We will take good care of your submitted dockers, and the efficiency of airway tree modeling may be one of the future directions we want to explore.