# Roadmap

## 2022

* Release training-operator v1.4 to be included in the Kubeflow v1.5 release.
* Migrate the v2 MPI operator to the unified operator.
* Migrate the PaddlePaddle operator to the unified operator.
* Support elastic training for additional frameworks besides PyTorch (a sketch of the existing PyTorch elastic configuration follows this list).
* Support different gang scheduling definitions.
* Improve test coverage.

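PyTorchJob already exposes elastic training through an `elasticPolicy` block, and extending other frameworks would mean adding a comparable knob to their specs. The manifest below is a minimal sketch of that existing shape, assuming the `minReplicas`, `maxReplicas`, `rdzvBackend`, and `maxRestarts` fields of the PyTorchJob v1 API; the job name, image, and entrypoint are placeholders.

```yaml
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: elastic-example                 # placeholder name
spec:
  elasticPolicy:
    minReplicas: 1                      # allow scaling down to one worker under resource pressure
    maxReplicas: 3                      # allow scaling up to three workers when capacity is free
    rdzvBackend: c10d                   # torchelastic rendezvous backend
    maxRestarts: 100
  pytorchReplicaSpecs:
    Worker:
      replicas: 3
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch             # the operator expects this container name
              image: registry.example.com/elastic-train:latest   # placeholder image
              command: ["python", "-m", "torch.distributed.run", "train.py"]  # placeholder entrypoint
```
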
## 2020 and 2021

### Maintenance and reliability

We will continue developing capabilities that improve the reliability, scalability, and maintainability of the production distributed training experience provided by the operators.

* Enhance the maintainability of the operator common module. Related issue: [#54](https://github.com/kubeflow/common/issues/54).
* Migrate operators to use [kubeflow/common](https://github.com/kubeflow/common) APIs. Related issue: [#64](https://github.com/kubeflow/common/issues/64).
* Graduate the MPI Operator, MXNet Operator, and XGBoost Operator to v1. Related issue: [#65](https://github.com/kubeflow/common/issues/65).

### Features

To take advantage of the capabilities of job scheduler components, the operators will expose more APIs for advanced scheduling. More features will be added to simplify usage, such as dynamic volume support and GitOps workflows. To make the operators easier to adopt within the Kubeflow ecosystem, we will also add more KFP launcher components.

* Support dynamic volume provisioning for distributed training jobs. Related issue: [#19](https://github.com/kubeflow/common/issues/19).
* MLOps - Allow users to submit jobs from a Git repo without building container images. Related issue: [#66](https://github.com/kubeflow/common/issues/66).
* Add Job priority and Queue in SchedulingPolicy for advanced scheduling in the common operator (see the sketch after this list). Related issue: [#46](https://github.com/kubeflow/common/issues/46).
* Add pipeline launcher components for different training jobs. Related issue: [pipeline#3445](https://github.com/kubeflow/pipelines/issues/3445).

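As one possible shape for the priority and queue work, the sketch below attaches those hints to a job through `runPolicy.schedulingPolicy`, the gang-scheduling block from the kubeflow/common v1 API. The queue name, PriorityClass, and TFJob details are illustrative assumptions rather than a settled design.

```yaml
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: queued-tfjob-example            # placeholder name
spec:
  runPolicy:
    schedulingPolicy:
      minAvailable: 3                   # gang size: all three workers must be schedulable together
      queue: team-a                     # assumed queue in the cluster's batch scheduler
      priorityClass: high-priority      # assumed PriorityClass created by the cluster admin
  tfReplicaSpecs:
    Worker:
      replicas: 3
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: tensorflow          # the operator expects this container name
              image: registry.example.com/tf-train:latest   # placeholder image
```
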
### Monitoring

* Provide a standardized logging interface. Related issue: [#60](https://github.com/kubeflow/common/issues/60).
* Expose generic Prometheus metrics in the common operators. Related issue: [#22](https://github.com/kubeflow/common/issues/22).
* Centralized Job Dashboard for training jobs (add metadata graph and model artifacts later). Related issue: [#67](https://github.com/kubeflow/common/issues/67).

### Performance

Continue to optimize reconciler performance and reduce the latency of acting on CR events.

* Optimize performance for 500 concurrent jobs and for large numbers of completed jobs. Related issues: [#68](https://github.com/kubeflow/common/issues/68), [tf-operator#965](https://github.com/kubeflow/tf-operator/issues/965), and [tf-operator#1079](https://github.com/kubeflow/tf-operator/issues/1079).

### Quarterly Goals

#### Q1 & Q2

- Better log support
  - Support log levels [#1132](https://github.com/kubeflow/training-operator/issues/1132)
  - Log errors in events
- Validating webhook [#1016](https://github.com/kubeflow/training-operator/issues/1016) (see the sketch below)

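A validating admission webhook would reject malformed jobs at submission time instead of letting them fail during reconciliation. The snippet below is a generic sketch of the Kubernetes registration object such work might produce; the webhook name, service, and path are hypothetical placeholders, not the operator's actual configuration.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: training-operator-validating-webhook     # hypothetical name
webhooks:
  - name: validator.pytorchjob.kubeflow.org      # hypothetical webhook identifier
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail                          # reject the job if validation cannot run
    rules:
      - apiGroups: ["kubeflow.org"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pytorchjobs"]
    clientConfig:
      service:
        name: training-operator                  # hypothetical Service fronting the webhook server
        namespace: kubeflow
        path: /validate-kubeflow-org-v1-pytorchjob   # hypothetical endpoint
```
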
#### Q3 & Q4

- Better Volcano support
  - Support queue [#916](https://github.com/kubeflow/training-operator/issues/916) (see the sketch below)
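
Queue support would most likely map a job's requested queue onto a Volcano `Queue` via the PodGroup the operator creates for gang scheduling. The sketch below pairs an illustrative Volcano Queue with a job that requests it through `runPolicy.schedulingPolicy.queue`; the names, weight, and image are assumptions.

```yaml
# An illustrative Volcano queue (cluster-scoped resource).
apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
  name: training-queue
spec:
  weight: 4                              # relative share of cluster resources for this queue
---
# A job asking to be admitted through that queue (placeholder details).
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: queued-pytorch-example
spec:
  runPolicy:
    schedulingPolicy:
      queue: training-queue              # forwarded to the PodGroup for Volcano to enforce
  pytorchReplicaSpecs:
    Worker:
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: registry.example.com/train:latest   # placeholder image
```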