Should we turn trace_only default to True in torchlib? #2812

@titaiwangms

Description

PR #2791 removed the inliner pass from the version converter and broke the benchmark.

Benchmarking revealed some un-traced (script-mode) functions:

```python
@torch_op(("aten::split", "aten::split.Tensor"))
def aten_split(self: TTensor, split_size: INT64, dim: int = 0) -> TTensor:
    """split.Tensor(Tensor(a -> *) self, SymInt split_size, int dim=0) -> Tensor(a)[]"""
    return op.SplitToSequence(self, split_size, axis=dim)


def aten_split_copy(self: TensorType, split_size: INT64, dim: int = 0) -> TensorType:
    """split_copy.Tensor(Tensor self, SymInt split_size, int dim=0) -> Tensor[]"""
    raise NotImplementedError()


@torch_op("aten::split_with_sizes")
def aten_split_with_sizes(self: TTensor, split_sizes: INT64, dim: int = 0) -> TTensor:
    """split_with_sizes(Tensor(a -> *) self, SymInt[] split_sizes, int dim=0) -> Tensor(a)[]"""
    return op.SplitToSequence(self, split_sizes, axis=dim)
```
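For context on what these two overloads compute, here is a pure-Python sketch of the split semantics they map to: `aten::split` takes a single chunk size (the last chunk may be smaller), while `aten::split_with_sizes` takes explicit per-chunk lengths. The helper name and the 1-D restriction are mine for illustration; this is not the torchlib implementation.

```python
def split_to_sequence(values, split, axis=0):
    """Sketch of split semantics on a 1-D Python list (axis ignored here).

    An int `split` means equal chunks of that size, last chunk possibly
    smaller (aten::split); a list of ints gives explicit chunk lengths
    (aten::split_with_sizes).
    """
    if isinstance(split, int):
        return [values[i : i + split] for i in range(0, len(values), split)]
    out, start = [], 0
    for size in split:
        out.append(values[start : start + size])
        start += size
    return out
```

For example, `split_to_sequence(list(range(5)), 2)` yields `[[0, 1], [2, 3], [4]]`, and `split_to_sequence(list(range(6)), [1, 2, 3])` yields `[[0], [1, 2], [3, 4, 5]]`.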

Although I will add the inliner back into the version converter, should we trace all of our functions ahead of time in torchlib? Or is there something blocking us?

cc @justinchuby @xadupre @gramalingam

Metadata

Labels: module: torchlib (Related to the torch/aten function lib in development)