Dataset Viewer
Auto-converted to Parquet
Column      Type    Lengths / values
commitId    string  length 40
datetime    string  length 30–31
subject     string  length 37–266
comment     string  length 109–15.2k
diff        string  length 238–914k
gitVersion  string  9 distinct values
16f53275378de95723b41dc23c0ec52ef54ae29
Thu, 11 Apr 2024 06:39:54 +0000
[PATCH 0001/1000] [AOTI] Serialize large weights (#123002)
By appending them to the end of the shared library and mmapping them afterwards. Disabled by default, but overridable by `config.aot_inductor.force_mmap_weights`. Implemented by adding a `USE_MMAP_SELF` define to `inductor/aoti_runtime/model.h`, which is defined when weights are appended to the binary. In that case, shared libr...
diff --git a/test/inductor/test_aot_inductor.py b/test/inductor/test_aot_inductor.py index ea21e5f140..5de6d91a0b 100644 --- a/test/inductor/test_aot_inductor.py +++ b/test/inductor/test_aot_inductor.py @@ -269,6 +269,22 @@ class AOTInductorTestsTemplate: ) self.check_model(Model(), example_inputs) ...
2.41.0
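The append-then-mmap idea in PATCH 0001 can be sketched in pure Python. This is a hypothetical stand-in: the real implementation is C++ in `inductor/aoti_runtime/model.h`, and the trailer format below (`MAGIC`, the 8-byte offset) is invented for illustration.

```python
import mmap
import os
import struct
import tempfile

MAGIC = b"WTS0"  # invented trailer marker, not PyTorch's real format

def append_weights(path: str, weights: bytes) -> None:
    """Append the weight blob plus a trailer recording where it starts."""
    with open(path, "ab") as f:
        offset = f.tell()            # current file size == start of the payload
        f.write(weights)
        f.write(MAGIC + struct.pack("<Q", offset))

def load_weights(path: str) -> bytes:
    """mmap the file and slice the weights back out using the trailer."""
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    trailer = mm[-12:]               # 4-byte magic + 8-byte little-endian offset
    assert trailer[:4] == MAGIC
    (offset,) = struct.unpack("<Q", trailer[4:])
    return bytes(mm[offset : len(mm) - 12])
```

Loading via mmap means the OS pages the weights in lazily rather than copying them up front, which is presumably the point of the `force_mmap_weights` option.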
aad72b0d3f2b03ae6d268b0c78a3cf349c0ae9f
Wed, 10 Apr 2024 18:05:40 -0700
[PATCH 0002/1000] Support all unsigned int sizes on unique (#123643)
Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/123643 Approved by: https://github.com/albanD, https://github.com/kit1980
diff --git a/aten/src/ATen/cuda/cub-RadixSortKeys.cu b/aten/src/ATen/cuda/cub-RadixSortKeys.cu index cf88c8aa0c..74e82ae55c 100644 --- a/aten/src/ATen/cuda/cub-RadixSortKeys.cu +++ b/aten/src/ATen/cuda/cub-RadixSortKeys.cu @@ -51,5 +51,8 @@ void radix_sort_keys( int64_t end_bit); AT_FORALL_SCALAR_TYPES_AND2(B...
2.41.0
2f687f32c3abddc0999733e26761a1f608029f3
Thu, 11 Apr 2024 06:53:10 +0000
[PATCH 0003/1000] Option to include stride and device annotation in gm.print_readable() (#123690)
Summary: Sample output for gm.print_readable(include_stride=True, include_device=True) ``` getitem_21: "i32[1200][1]cuda:0" = auto_functionalized_4[1] copy_2: "f32[2, 60][60, 1]cuda:1" = .... ``` Test Plan: CI Differential Revision: D55949129 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123690 Ap...
diff --git a/test/expect/TestFXAPIBackwardCompatibility.test_function_back_compat-fx_backcompat_function_signatures.expect b/test/expect/TestFXAPIBackwardCompatibility.test_function_back_compat-fx_backcompat_function_signatures.expect index d6630cff36..2996edd485 100644 --- a/test/expect/TestFXAPIBackwardCompatibility....
2.41.0
8d2504eece2ba5e464a42b253ea07f70e9ba5b6
Tue, 9 Apr 2024 12:11:09 -0700
[PATCH 0004/1000] [aot] always pass inputs to runtime_wrapper as list and add type annotations (#123630)
`runtime_wrapper` unpacking the arguments as a Tuple[arg] will prevent them from being freed within its scope. This is problematic if inductor wants to free those inputs, which could be activations in the compiled-backwards case. This PR only changes the signature to pass them as a list, but does not clear it, keeping the same r...
diff --git a/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py b/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py index dda3144b24..5c9c3424d3 100644 --- a/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py +++ b/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py @@ -1...
2.41.0
510afb8857e6565612862496a6478733fe7b8db
Wed, 10 Apr 2024 17:53:07 -0700
[PATCH 0005/1000] [aot] refactor runtime_wrapper's epilogue args access (#123674)
I want runtime_wrapper args to be stealable by call_func_at_runtime_with_args, since the args may contain activations which we don't want to hold alive in this scope. The args to runtime_wrapper **should always be** from a list created within aot_autograd, so it **should always be** safe to steal them: https://github....
diff --git a/torch/_functorch/_aot_autograd/runtime_wrappers.py b/torch/_functorch/_aot_autograd/runtime_wrappers.py index 1ef2df56a2..3d11c01fe9 100644 --- a/torch/_functorch/_aot_autograd/runtime_wrappers.py +++ b/torch/_functorch/_aot_autograd/runtime_wrappers.py @@ -72,8 +72,29 @@ def create_runtime_wrapper( i...
2.41.0
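The ownership convention PATCH 0004 and 0005 describe — pass inputs as a list the callee may "steal", so the wrapper's frame doesn't keep activations alive — can be illustrated in plain Python. The names below are hypothetical; this is not the actual `runtime_wrapper` code.

```python
import weakref

class Activation:
    """Stand-in for a tensor we want freed promptly."""

def call_func_at_runtime_with_args(fn, args: list):
    """Take ownership of `args`: empty the caller's list before running."""
    stolen = args[:]   # move the references into this frame
    args.clear()       # the caller's list no longer pins the inputs
    return fn(stolen)

def runtime_wrapper(args: list):
    # Because `args` is a mutable list (not a tuple), the callee can steal it.
    return call_func_at_runtime_with_args(lambda xs: len(xs), args)
```

Under CPython's reference counting, once the wrapper returns, nothing else references the stolen inputs, so they are freed immediately instead of living until the wrapper's tuple goes out of scope.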
00282fecfcb53790aebfb24cc48a8703577778e
Wed, 10 Apr 2024 18:33:29 -0700
[PATCH 0006/1000] [c10d] make monitorThread sleep when we try to dump (#123788)
Summary: We separated the FR dump logic from the desync debug logic, so we no longer set collectiveDebugInfoMode_ to true when we just need an FR dump. That's why the monitor thread did not sleep and tried to kill the process without waiting for the dump. The fix is simple: we should sleep whenever shouldDump_ is true. Test Pla...
diff --git a/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp b/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp index d9f9e6e574..def79cde2b 100644 --- a/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp +++ b/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp @@ -1268,6 +1268,7 @@ void ProcessGroupNCCL::heartbeatMonitor...
2.41.0
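The fix in PATCH 0006 — wait for the dump whenever `shouldDump_` is set, rather than only in desync-debug mode — can be mimicked with Python's `threading` primitives. This is a hedged sketch; the real code is the C++ heartbeat monitor in `ProcessGroupNCCL.cpp`, and the return strings are invented.

```python
import threading

class Monitor:
    def __init__(self, dump_wait_s: float = 1.0):
        self.should_dump = threading.Event()   # plays the role of shouldDump_
        self.dump_done = threading.Event()
        self.dump_wait_s = dump_wait_s

    def on_heartbeat_timeout(self) -> str:
        if self.should_dump.is_set():
            # The fix: give the flight-recorder dump a chance to finish
            # before killing the process.
            if self.dump_done.wait(self.dump_wait_s):
                return "kill-after-dump"
            return "kill-after-wait-timeout"
        return "kill-immediately"
```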
ac99d539be35e806d8d719fa69ceddaf63c6373
Thu, 11 Apr 2024 08:56:02 +0000
[PATCH 0007/1000] Only initialize state if needed in SGD (#123757)
Fixes [T184381726](https://www.internalfb.com/intern/tasks/?t=184381726) Pull Request resolved: https://github.com/pytorch/pytorch/pull/123757 Approved by: https://github.com/janeyx99
diff --git a/test/inductor/test_compiled_optimizers.py b/test/inductor/test_compiled_optimizers.py index 7c93f326f4..d076f27b17 100644 --- a/test/inductor/test_compiled_optimizers.py +++ b/test/inductor/test_compiled_optimizers.py @@ -310,7 +310,8 @@ def make_recompile_test(optim_cls, closure=None, kernel_count=2, **kw...
2.41.0
b7741546b1ee53e5aa3768616c50eab72372a3a
Thu, 11 Apr 2024 09:02:31 +0000
[PATCH 0008/1000] Fixed arange decomp for float dtype (#121013)
## Description: - [x] Fixed arange decomp for float dtype - [x] Added a test ## Current state Arange graph and C++ generated code are not optimal when arange is created directly using float32 dtype: ```python import torch def func(x): s = x.shape[-1] a = torch.arange(s, dtype=torch.float32) return s + a c_func = t...
diff --git a/test/test_decomp.py b/test/test_decomp.py index 4e482a92d5..39d0c2eef2 100644 --- a/test/test_decomp.py +++ b/test/test_decomp.py @@ -38,6 +38,7 @@ from torch._ops import DispatchKey import itertools import functools from functools import partial +import re import unittest aten = torch.ops.aten @@ -...
2.41.0
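One reason arange decompositions like the one in PATCH 0008 keep index arithmetic in integers can be shown in pure Python. Hedged: the real decomposition rewrites the FX graph and generated C++; the two functions below are invented purely for the comparison.

```python
def arange_accumulate(n: int, step: float = 0.1) -> list:
    """Naive float arange: repeated addition lets rounding error accumulate."""
    out, cur = [], 0.0
    for _ in range(n):
        out.append(cur)
        cur += step
    return out

def arange_decomposed(n: int, step: float = 0.1) -> list:
    """Decomposed form: exact integer indices, one rounding per element."""
    return [i * step for i in range(n)]
```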
798f5bf0d58fb9655c4da9c0a8bc1ec8af31aea
Wed, 10 Apr 2024 23:23:28 -0700
[PATCH 0009/1000] Add Quantization recipe filter per operator type for x86_inductor_quantizer (#122775)
**Summary** Default recipes are enabled in `X86InductorQuantizer`, and requests have come in to customize recipes based on these defaults. - Avoid annotation propagation and restrict annotation to `conv`/`linear` only. - Add `matmul` to the quantization recipes, noting that it's not a general recipe but is tailored to m...
diff --git a/test/quantization/pt2e/test_x86inductor_quantizer.py b/test/quantization/pt2e/test_x86inductor_quantizer.py index 06e2e6c9f9..c9df319bfd 100644 --- a/test/quantization/pt2e/test_x86inductor_quantizer.py +++ b/test/quantization/pt2e/test_x86inductor_quantizer.py @@ -1346,3 +1346,105 @@ class TestQuantizePT2...
2.41.0
8e9261b906f69b397e4027362be801f98a68d62
Wed, 10 Apr 2024 23:23:28 -0700
[PATCH 0010/1000] Add Matmul recipe into x86_inductor_quantizer (#122776)
**Summary** Add `matmul` in the quantization recipes, noting that it's not a general recipe but tailored to meet accuracy criteria for specific models. `matmul` recipe is disabled by default. **Test Plan** ``` python -m pytest quantization/pt2e/test_x86inductor_quantizer.py -k test_attention_block ``` Pull Request re...
diff --git a/test/quantization/pt2e/test_x86inductor_quantizer.py b/test/quantization/pt2e/test_x86inductor_quantizer.py index c9df319bfd..4af5a30ddf 100644 --- a/test/quantization/pt2e/test_x86inductor_quantizer.py +++ b/test/quantization/pt2e/test_x86inductor_quantizer.py @@ -289,21 +289,42 @@ class TestHelperModules...
2.41.0
4580f76d9e4a81b70a94062b762e3af919d95d0
Wed, 10 Apr 2024 21:38:33 -0700
[PATCH 0011/1000] fix flop counter issue with out parameters (#123768)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123768 Approved by: https://github.com/zou3519
diff --git a/test/test_flop_counter.py b/test/test_flop_counter.py index 74bc666db6..1a9a757f9f 100644 --- a/test/test_flop_counter.py +++ b/test/test_flop_counter.py @@ -248,8 +248,8 @@ class TestFlopCounter(TestCase): self.assertExpectedInline(get_total_flops(mode), """5""") - def count(*args, out...
2.41.0
a5e7a01b5368b8ba11edcb62942630a1474e6e3
Wed, 10 Apr 2024 11:02:32 -0700
[PATCH 0015/1000] [custom_op] Schema inference now includes default values (#123453)
If the function has default values, we should be able to do schema inference and put the default values into the schema. Test Plan: - new tests Pull Request resolved: https://github.com/pytorch/pytorch/pull/123453 Approved by: https://github.com/albanD
diff --git a/test/test_custom_ops.py b/test/test_custom_ops.py index 7479225785..10cb60e8ae 100644 --- a/test/test_custom_ops.py +++ b/test/test_custom_ops.py @@ -688,20 +688,6 @@ class TestCustomOp(CustomOpTestCaseBase): infer_schema(foo) - with self.assertRaisesRegex(ValueError, "default value...
2.41.0
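Schema inference over default values, as in PATCH 0015, can be sketched with the standard `inspect` module. The schema string format below is simplified and hypothetical — not `torch.library`'s real schema syntax.

```python
import inspect

def infer_schema(fn) -> str:
    """Render a toy schema string, carrying defaults through when present."""
    parts = []
    for name, p in inspect.signature(fn).parameters.items():
        if p.default is inspect.Parameter.empty:
            parts.append(name)
        else:
            parts.append(f"{name}={p.default!r}")
    return f"{fn.__name__}({', '.join(parts)})"
```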
b4419dc4d9a4e5555de2a4def0eb77f10c8832a
Wed, 10 Apr 2024 11:02:32 -0700
[PATCH 0016/1000] Refresh OpOverloadPacket if a new OpOverload gets added (#123578)
If a user accesses an OpOverloadPacket, then creates a new OpOverload, then uses the OpOverloadPacket, the new OpOverload never gets hit. This is because OpOverloadPacket caches OpOverloads when it is constructed. This PR fixes the problem by "refreshing" the OpOverloadPacket if a new OpOverload gets constructed and t...
diff --git a/test/test_custom_ops.py b/test/test_custom_ops.py index 10cb60e8ae..86c21f228d 100644 --- a/test/test_custom_ops.py +++ b/test/test_custom_ops.py @@ -2393,6 +2393,30 @@ Please use `add.register_fake` to add an fake impl.""", y = f(x) self.assertEqual(y, x.sin()) + @skipIfTorchDynamo(...
2.41.0
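The staleness bug and refresh fix described in PATCH 0016 reduce to a generic caching pattern, sketched here in plain Python. This is a toy stand-in, not PyTorch's actual `OpOverloadPacket`.

```python
class OverloadRegistry:
    """Toy stand-in for the dispatcher's overload table (not PyTorch code)."""

    def __init__(self):
        self._overloads = {}
        self._packets = []

    def register(self, name, fn):
        self._overloads[name] = fn
        for packet in self._packets:
            packet.refresh(self._overloads)   # the fix: refresh live packets

    def packet(self):
        p = OverloadPacket(self._overloads)
        self._packets.append(p)
        return p

class OverloadPacket:
    """Caches the overloads it saw; stale without the refresh above."""

    def __init__(self, overloads):
        self.refresh(overloads)

    def refresh(self, overloads):
        self._cache = dict(overloads)

    def call(self, name, *args):
        return self._cache[name](*args)
```

Without the `refresh` loop in `register`, an overload added after a packet was constructed would be invisible through that packet — the bug the PR fixes.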
38729c0cdf3ce4274f4d68f8e46e5a1cd36cbe8
Wed, 10 Apr 2024 11:02:33 -0700
[PATCH 0017/1000] Switch quantized_decomposed over to new custom ops API (#123454)
We are taking API feedback. Changes: - I removed some of the default values (they weren't being used). - I was unable to convert the last op (which is essentially an autograd.Function registered as CompositeImplicitAutograd). That one is "incorrectly registered"; I punt fixing it to the future. Test Plan: - existing t...
diff --git a/torch/_custom_op/impl.py b/torch/_custom_op/impl.py index fefd7cedf9..6f25e2b9af 100644 --- a/torch/_custom_op/impl.py +++ b/torch/_custom_op/impl.py @@ -882,6 +882,11 @@ SUPPORTED_RETURN_TYPES = { def parse_return(annotation, error_fn): + if annotation == inspect.Signature.empty: + error_fn...
2.41.0
34e56fa3352aefa208b33b0a86aaabed8033f7a
Wed, 10 Apr 2024 15:10:59 -0700
[PATCH 0018/1000] inductor: log unique id to match output_code to aot graphs (#118647)
I found it helpful to be able to see, given some inductor output code, which AOT graph it came from. When you have large models with multiple graphs floating around, this can be difficult, so I added the aot_config.aot_id to the printed inductor output. Pull Request resolved: https://github.com/pytorch/pytorch/pull/118...
diff --git a/torch/_functorch/_aot_autograd/logging_utils.py b/torch/_functorch/_aot_autograd/logging_utils.py index 28f82555ac..414166cbdd 100644 --- a/torch/_functorch/_aot_autograd/logging_utils.py +++ b/torch/_functorch/_aot_autograd/logging_utils.py @@ -46,12 +46,22 @@ def track_graph_compiling(aot_config, graph_n...
2.41.0
83900887f2fb5c7a04e7fd78ad8de7a20f356d4
Wed, 10 Apr 2024 14:19:07 -0700
[PATCH 0019/1000] [quant] Enable backward for choose_qparams_per_token_asymmetric (#123452)
Summary: When running the backward for this op, we get the error: ``` RuntimeError: derivative for aten::aminmax is not implemented ``` This commit replaces this call with separate amin and amax calls instead, which do have implemented derivatives. Test Plan: python test/test_quantization.py -k test_decomposed_choose_...
diff --git a/test/quantization/core/test_quantized_tensor.py b/test/quantization/core/test_quantized_tensor.py index b2bd97bdc3..228f1f8ee7 100644 --- a/test/quantization/core/test_quantized_tensor.py +++ b/test/quantization/core/test_quantized_tensor.py @@ -1602,6 +1602,14 @@ class TestQuantizedTensor(TestCase): ...
2.41.0
fa36ef09210b67022439b49eee01d7b63bd6d96
Wed, 10 Apr 2024 19:31:01 -0400
[PATCH 0020/1000] Natively support int truncation, don't guard on positive/negative (#122827)
This doesn't entirely fix the original problem that prompted this, but it seems to just be getting stuck in export constraint formatting now which seems like progress to me. Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/122827 Approved by: https://githu...
diff --git a/test/export/test_export.py b/test/export/test_export.py index 80a6f0b993..d04ab18384 100644 --- a/test/export/test_export.py +++ b/test/export/test_export.py @@ -1086,6 +1086,36 @@ class TestExport(TestCase): inps = (torch.ones(6, 4), torch.tensor(5), torch.tensor(4)) self._test_export_sa...
2.41.0
02374cc091e549c586b72c9b252d33256ec921e
Thu, 11 Apr 2024 17:34:47 +0000
[PATCH 0023/1000] [CI] show doc coverage repro instructions (#123688)
remind devs they can reproduce the doc coverage error locally with the following msg: ```You can reproduce locally by running 'cd pytorch/docs && make coverage && cat build/coverage/python.txt'``` I spent 20 min figuring out how to test locally, so I want to enrich the error msg. ...
diff --git a/.ci/pytorch/python_doc_push_script.sh b/.ci/pytorch/python_doc_push_script.sh index ce14ac1d02..d4076d3469 100755 --- a/.ci/pytorch/python_doc_push_script.sh +++ b/.ci/pytorch/python_doc_push_script.sh @@ -105,6 +105,7 @@ if [ "$is_main_doc" = true ]; then echo undocumented objects found: cat bui...
2.41.0
9c565b24e6c305c09c8c908e27f4023f41dd567
Wed, 10 Apr 2024 18:54:51 -0700
[PATCH 0024/1000] [inductor] Write generated files from parent process (#123409)
Before this PR we would pass generated source code over a pipe to the compile worker then the compile worker would write out the file. Doing it this way is faster and results in smaller messages to the workers (and lets us skip creating the workers in the warm start case). Pull Request resolved: https://github.com/py...
diff --git a/torch/_inductor/codecache.py b/torch/_inductor/codecache.py index 98cf75fc23..4e84838504 100644 --- a/torch/_inductor/codecache.py +++ b/torch/_inductor/codecache.py @@ -59,12 +59,7 @@ from torch._dynamo.device_interface import ( from torch._dynamo.utils import counters, dynamo_timed from torch._inductor...
2.41.0
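The change in PATCH 0024 — the parent process writes the generated source itself and hands workers a short path instead of piping the whole blob — in a hedged stdlib sketch. The function names and cache layout below are invented; inductor's real `codecache` is more involved.

```python
import hashlib
import os
import tempfile

def write_source(source: str, cache_dir: str) -> str:
    """Write `source` under a content-addressed name, once, from the parent."""
    key = hashlib.sha256(source.encode()).hexdigest()[:16]
    path = os.path.join(cache_dir, f"{key}.py")
    if not os.path.exists(path):
        tmp = path + ".tmp"
        with open(tmp, "w") as f:
            f.write(source)
        os.replace(tmp, path)     # atomic publish, safe under concurrency
    return path

def worker_message(source: str, cache_dir: str) -> dict:
    # The worker now receives a small path instead of the full source text.
    return {"compile": write_source(source, cache_dir)}
```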
c451798cc5a7882e95b01600aa643b042b11b1e
Wed, 10 Apr 2024 12:50:21 -0700
[PATCH 0025/1000] [inductor] Disable channels_last heuristic when channels==1 (#123758)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123758 Approved by: https://github.com/shunting314
diff --git a/test/inductor/test_cpu_repro.py b/test/inductor/test_cpu_repro.py index 9cc0e9b93a..80a0fed789 100644 --- a/test/inductor/test_cpu_repro.py +++ b/test/inductor/test_cpu_repro.py @@ -1630,6 +1630,19 @@ class CPUReproTests(TestCase): self.common(fn, (value, mask)) ...
2.41.0
End of preview.
README.md exists but content is empty.
Downloads last month: 6